The D3S Seminar is a regular meeting of department members and guest speakers. It also serves as the regular course Advanced Topics in Distributed and Component-Based Systems I, II (NSWI057, NSWI058), which is recommended for Ph.D. students and advanced graduate students.

Regular meetings take place on Tuesdays at 14:00 in S9 (if not noted otherwise in the schedule below).

We recommend subscribing to the seminar mailing list to receive announcements of special seminars, schedule updates, and other important news.

Scheduled seminars

Tomáš Petříček

March 28, 2023 at 14:00 in S9

Cultures of Programming

Computer programming originated at the intersection of logic, art, electrical engineering, business and psychology. Those disciplines brought with them different ways of thinking about and doing programming. In the early days, the friction between cultures could have been just a sign of the immaturity of the field. Seventy years later, the idea that a unified culture will emerge as the field matures is becoming difficult to believe. Different cultures keep their strong identity, but they interact. Interesting innovations appear and revealing discussions occur when multiple cultures meet or clash. In this talk, I will illustrate the interactions between different cultures of programming using three case studies from the history of programming: the history of formal reasoning about programs, the rise of interactive programming tools and the development of software engineering.


David Košťál

April 04, 2023 at 14:00 in S9

Insecurity Refactoring: Automated Injection of Vulnerabilities in Source Code (report on a paper)


Past seminars

Martin Blicha

March 14, 2023

Effective Automated Software Verification: A Multilayered Approach (Ph.D. Thesis Rehearsal)

Automated software verification decides the correctness of a given program with respect to a given formal specification. Formal verification techniques based on model checking provide the necessary guarantees by exploring the program's behaviour exhaustively and automatically. Even though the general problem that automated software verification tries to solve is undecidable, verification tools are quite efficient on many instances that arise in practice. However, significant challenges related to scalability persist for complex modern programs. In our work, we argue that this task can be approached by providing solutions at different levels, which we identify as the foundational, verification and cooperative layers of the problem. These correspond to decision and interpolation procedures, sequential model-checking algorithms, and multi-agent solving approaches, respectively. We further argue that work on the higher layers can benefit significantly from a deep understanding of the layers beneath them. Overall, we advance the field of automated software verification by contributing solutions on all three layers.
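
As a minimal, illustrative sketch of the kind of query answered at the foundational layer (not the techniques from the thesis), one can encode a few unrolled steps of a tiny program for the Z3 SMT solver and ask whether an assertion can be violated within that bound:

```python
# Illustrative bounded model-checking query (not the thesis' actual techniques):
# unroll a tiny loop k times and ask Z3 whether its assertion can be violated.
from z3 import Int, Solver, Or, sat

# Program: x := 0; repeat k times: x := x + 2; assert x != 7 after every step
k = 5
xs = [Int(f"x_{i}") for i in range(k + 1)]

s = Solver()
s.add(xs[0] == 0)
for i in range(k):
    s.add(xs[i + 1] == xs[i] + 2)                  # transition relation of one step
s.add(Or([xs[i] == 7 for i in range(k + 1)]))      # negation of the assertion

print("assertion can fail" if s.check() == sat else f"no violation up to bound {k}")
```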


Andrej Pečimúth

February 28, 2023

Transformation Bisection Tool for the Graal Compiler (Review of the project)

The performance of generated machine code is closely linked to the applied optimizations. Consequently, compiler regressions often manifest as changes in the applied optimizations. We propose capturing the optimization and inlining decisions performed by the compiler in each compilation unit. These decisions can be represented as a tree. We introduce an approach that uses tree matching to detect optimization differences in a semi-automated way. The presented techniques were employed to pinpoint the causes of performance problems in various benchmarks of the Graal compiler.
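
A hedged sketch of the underlying idea (the node structure and names below are illustrative, not the tool's actual representation): each compilation's decisions form a labelled tree, and matching two such trees reports the decisions that differ between a baseline and a regressing build.

```python
# Minimal sketch of tree-based comparison of compiler optimization decisions.
# The node structure and phase names are illustrative, not the Graal tool's API.
from dataclasses import dataclass, field

@dataclass
class Decision:
    label: str                                   # e.g. "inline java.util.ArrayList.add"
    children: list["Decision"] = field(default_factory=list)

def diff(old, new, path=""):
    """Recursively report decisions present in one tree but not the other."""
    here = f"{path}/{old.label if old else new.label}"
    if old is None:
        yield f"+ {here} (only in new compilation)"
        return
    if new is None:
        yield f"- {here} (only in old compilation)"
        return
    old_kids = {c.label: c for c in old.children}
    new_kids = {c.label: c for c in new.children}
    for label in old_kids.keys() | new_kids.keys():
        yield from diff(old_kids.get(label), new_kids.get(label), here)

# Example: the regressing build no longer inlines a hot callee.
baseline = Decision("compile foo", [Decision("inline bar"), Decision("unroll loop@17")])
regressed = Decision("compile foo", [Decision("unroll loop@17")])
for line in diff(baseline, regressed):
    print(line)   # -> "- /compile foo/inline bar (only in old compilation)"
```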


Michal Töpfer

December 06, 2022

Introducing Estimators—Abstraction for Easy ML Employment in Self-adaptive Architectures

In this talk, we will summarize our recent work in the area of using machine learning for self-adaptation of software systems. We focused on providing application-friendly abstractions and tools that allow the architects to focus on the application business logic rather than on the intricacies of integrating ML into the adaptation loop. We proposed ML-DEECo – an ensemble-based component model with Estimators, which provide predictions of future and currently unobservable values in the self-adaptive system. The architect only needs to specify the inputs and outputs of the Estimator, and the underlying ML model is trained automatically by our ML-DEECo runtime framework.
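
As a rough illustration of the abstraction (the names and API below are hypothetical, not the actual ML-DEECo interface), an Estimator could be declared by naming its input fields and the value it predicts, with data collection and training handled by the runtime:

```python
# Hypothetical sketch of the Estimator idea, not the actual ML-DEECo API:
# the architect declares inputs and the predicted output; the runtime collects
# training data and fits the underlying model automatically.
from sklearn.linear_model import LinearRegression

class Estimator:
    def __init__(self, inputs, output):
        self.inputs, self.output = inputs, output
        self.model = LinearRegression()
        self.samples = []                    # (input vector, observed output) pairs

    def observe(self, component):
        """Called by the runtime each step to collect training data."""
        x = [getattr(component, name) for name in self.inputs]
        y = getattr(component, self.output)
        self.samples.append((x, y))

    def train(self):
        xs, ys = zip(*self.samples)
        self.model.fit(list(xs), list(ys))

    def predict(self, component):
        x = [[getattr(component, name) for name in self.inputs]]
        return self.model.predict(x)[0]

# A component's fields that feed the estimator, and the value it should predict:
battery_estimator = Estimator(inputs=["battery", "speed"], output="battery_in_10_steps")
```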


Milad Abdullah

November 29, 2022

Reducing Computation Costs in Performance Regression Detection

We will cover recent attempts to automatically detect performance regressions in benchmarking projects such as GraalVM. The general aim is to reduce the computation costs of benchmarking by reducing the number of benchmark runs that are unlikely to provide useful information about the existence of a performance regression.
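
One possible cost-saving policy, sketched below purely for illustration (not necessarily the method presented in the talk), is to stop scheduling further runs of a benchmark once the existing measurements already give a confident verdict either way:

```python
# Hedged sketch of a run-skipping policy (not necessarily the method from the talk):
# skip further runs once the existing measurements give a clear verdict.
from statistics import mean, stdev
from math import sqrt

def needs_more_runs(old_runs, new_runs, min_runs=5):
    """Return True if scheduling another benchmark run is still worthwhile."""
    if min(len(old_runs), len(new_runs)) < min_runs:
        return True
    # Welch-style t statistic comparing old and new run times.
    se = sqrt(stdev(old_runs) ** 2 / len(old_runs) + stdev(new_runs) ** 2 / len(new_runs))
    t = abs(mean(new_runs) - mean(old_runs)) / se
    # A large |t| means a clear regression, a tiny one means clearly no change;
    # only the ambiguous middle ground justifies spending more machine time.
    return 0.5 < t < 2.0

old = [1.00, 1.01, 0.99, 1.02, 1.00]
new = [1.20, 1.19, 1.21, 1.22, 1.20]
print(needs_more_runs(old, new))   # False: the slowdown is already obvious
```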


Adam Šmelko

November 08, 2022

Noarr – C++ library for handling memory layouts and traversals of regular data structures


Jan Pacovský

October 25, 2022

Generalization of Machine-learning Adaptation in Ensemble-based Self-adaptive Systems


Filip Kliber

October 11, 2022

Fuzzing of Multithreaded Programs in .NET - Challenges and Solutions

Reasoning about programs that use multiple computational threads is very difficult, because the behaviour of the program might depend on the specific interleaving of operations performed by the individual threads. Moreover, some of the thread interleavings might exhibit a subtle bug in the application. This talk will demonstrate which challenges need to be tackled when trying to control the thread interleaving of a .NET application in order to force some atomicity violations to manifest.
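
The class of bugs targeted here only shows up under particular interleavings. The sketch below illustrates the general check-then-act pattern in Python (the talk itself concerns .NET); a fuzzer that controls interleavings injects scheduling points between the check and the act, simulated here by a sleep, so that the rare buggy interleaving manifests reliably:

```python
# Minimal illustration of an atomicity violation (the talk targets .NET; this is
# just the general check-then-act pattern in Python). The sleep plays the role of
# a scheduling point that an interleaving-controlling fuzzer would inject.
import threading, time

slots, claimed = 1, 0            # one free slot, no claims yet

def claim():
    global slots, claimed
    if slots > 0:                # check ...
        time.sleep(0.01)         # forced scheduling point (normally injected by the fuzzer)
        slots -= 1               # ... then act: the check-then-act pair is not atomic
        claimed += 1

threads = [threading.Thread(target=claim) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

print(slots, claimed)            # with the forced interleaving: -1 2 instead of 0 1
```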


Michele Tucci

September 27, 2022

Code Coverage for Performance Testing

Test coverage measures what percentage of source code has been executed by a test suite, and it has been used for decades as a metric to assess the quality of tests. Despite its popularity in functional testing, the same quality criteria have seldom been applied to performance testing. Nonetheless, given the considerable costs associated with the design and execution of performance tests, metrics of code coverage could become crucial in the optimization of performance testing activities. In this talk, we will focus on how code coverage can be defined for performance testing, in a way that is compatible with current goals and practices in the field. We will introduce a method to compute code coverage in practice, and we will explore the challenges that arise when compared to functional testing.
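
Below is a hedged sketch, in Python, of how line coverage can be collected while a performance workload runs, using the standard sys.settrace hook (a generic illustration, not the method from the talk). The heavy tracing overhead also hints at one challenge specific to performance testing: instrumentation must not distort the timings being measured.

```python
# Sketch of collecting line coverage during a performance workload using the
# standard tracing hook; illustrative only, not the approach from the talk.
import sys, time

covered = set()

def tracer(frame, event, arg):
    if event == "line":
        covered.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def workload(n):
    total = 0
    for i in range(n):
        if i % 2:                # only the odd-i branch adds to the total
            total += i
    return total

sys.settrace(tracer)
start = time.perf_counter()
workload(10_000)
elapsed = time.perf_counter() - start
sys.settrace(None)

print(f"{elapsed:.4f}s, {len(covered)} distinct lines executed")
```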


Pavel Parízek

January 28, 2020

Habilitation talk rehearsal