The D3S Seminar is a regular meeting of department members and guest speakers. It also serves as the course Advanced Topics in Distributed and Component-Based Systems I, II (NSWI057, NSWI058), which is recommended for Ph.D. and advanced graduate students.

Regular meetings take place on Tuesdays at 14:00 in S9 (if not noted otherwise in the schedule below).

We recommend subscribing to the seminar mailing list to receive announcements on special seminars, schedule updates and other important news.

Scheduled seminars

No seminars are currently scheduled.

Past seminars

Philipp Rümmer (University of Regensburg)

May 23, 2023

Black Ostrich: Web Application Scanning with String Solvers

Securing web applications remains a pressing challenge. Unfortunately, the state of the art in web crawling and security scanning still falls short of deep crawling. A major roadblock is the crawlers’ limited ability to pass input validation checks when web applications require data of a certain format, such as email, phone number, or zip code. This talk presents Black Ostrich, a principled approach to deep web crawling and scanning. The key idea is to equip web crawling with string constraint-solving capabilities to dynamically infer suitable inputs from regular expression patterns in web applications and thereby pass input validation checks. To enable this use of constraint solvers, we develop new automata-based techniques to handle complex real-world regular expressions, including support for the relevant features of ECMA JavaScript regular expressions. We implement our approach by extending and combining the Ostrich constraint solver with the Black Widow web crawler. Joint work with Benjamin Eriksson, Amanda Stjerna, Riccardo De Masellis, and Andrej Sabelfeld, to appear in the ACM Conference on Computer and Communications Security (CCS) 2023.
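The core idea of passing validation checks can be illustrated with a toy: given the regular expression a web form uses for validation, synthesize an input that satisfies it. The brute-force breadth-first search below is only an illustrative stand-in for a real string solver such as Ostrich, which reasons over automata instead of enumerating candidates.

```python
import re
from collections import deque

def shortest_match(pattern, alphabet, max_len=8):
    """Breadth-first search for the shortest string over `alphabet`
    accepted by `pattern` -- a toy stand-in for a string constraint
    solver; a real solver works on the regex's automaton directly."""
    compiled = re.compile(pattern)
    queue = deque([""])
    while queue:
        candidate = queue.popleft()
        if compiled.fullmatch(candidate):
            return candidate
        if len(candidate) < max_len:
            for ch in alphabet:
                queue.append(candidate + ch)
    return None

# Infer an input that passes a zip-code validation check.
print(shortest_match(r"\d{5}", "0123456789"))  # → 00000
```

A crawler equipped with such inference can fill in fields that demand a zip code, phone number, or email format and thus reach pages behind the validation step.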

David Georg Reichelt (Lancaster University in Leipzig)

May 04, 2023

Examination of Performance Changes

Changes to the source code of software may affect its performance. Detecting and analyzing such changes requires specifying workloads, measuring them, and analyzing the measurement results. Specifying workloads that are able to detect performance regressions requires immense manual effort. The talk presents the Peass approach (Performance analysis of software systems). Peass is based on the assumption that performance changes can be identified by unit tests. Therefore, Peass consists of (1) a method for regression test selection, which determines between which commits the performance may have changed, based on static code analysis and analysis of the runtime behaviour, (2) a method for transforming unit tests into performance tests and for statistically reliable and reproducible measurement of performance, and (3) a method for aiding the diagnosis of the root causes of performance changes. The Peass approach thereby allows us to automatically examine performance changes that are measurable by the workload of unit tests.

View slides
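The statistical step of such a pipeline can be sketched generically: given repeated measurements of the same workload before and after a commit, decide whether the observed difference is significant. The permutation test below is a standard technique used here for illustration, not Peass's actual statistical procedure.

```python
import random
import statistics

def permutation_test(old, new, iterations=10_000, seed=0):
    """Two-sided permutation test on the difference of sample means.
    Returns an estimated p-value: the fraction of random relabelings
    whose mean difference is at least as extreme as the observed one."""
    rng = random.Random(seed)  # fixed seed for reproducible decisions
    observed = abs(statistics.mean(new) - statistics.mean(old))
    pooled = list(old) + list(new)
    extreme = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        a, b = pooled[:len(old)], pooled[len(old):]
        if abs(statistics.mean(b) - statistics.mean(a)) >= observed:
            extreme += 1
    return extreme / iterations

before = [10.1, 10.3, 9.9, 10.0, 10.2]   # seconds, old commit
after = [11.0, 11.2, 10.9, 11.1, 11.3]   # seconds, new commit (~10% slower)
print(permutation_test(before, after) < 0.05)  # regression detected
```

A low p-value flags the commit pair for root-cause diagnosis; identical distributions yield a high p-value and the change is dismissed as noise.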

Alexandru Iosup

April 26, 2023

Massivizing Computer Systems: VU on the Science, Design, and Engineering of Distributed Systems and Ecosystems

Wherever we look, our society is turning digital. Science and engineering, business-critical and economic operations, and online education and gaming rely increasingly on the effective digitalization of their processes. For digitalization to succeed, societal processes must leverage efficient computer systems, effectively and efficiently integrated into larger ecosystems, managed largely without input from application developers or even clients. However successful these ecosystems have been so far, we cannot take them for granted: their core does not yet rely on sound principles of science and design, and there are warning signs about the scalability, dependability, and sustainability of engineered operations. This is the challenge of massivizing computer systems. In this talk, inspired by this challenge and by our experience with distributed computer systems for over 15 years, we focus on understanding, deploying, scaling, and evolving computer ecosystems successfully. We posit we can achieve this through an ambitious, comprehensive research program, which starts from the idea that we can address the grand, fundamental challenge by focusing on computer ecosystems rather than merely on (individual, small-scale) computer systems. To this end, we define (distributed) computer ecosystems and differentiate them from (distributed) computer systems. We formulate principles and introduce a reference architecture for computer ecosystems supporting diverse workloads – AI/ML, big data and graph processing, online gaming and metaverse, and business-critical and serverless – and diverse resources and backend services across the computing continuum. We synthesize a framework of resource management and scheduling (RM&S) techniques, which we argue should be explored systematically in the next decade.
We show early results obtained experimentally, through controlled real-world experiments, long-term observation, and what-if analysis of short- and long-term scenarios using the OpenDC digital twin for datacenters. This is a call to the entire community: there is much to discover and achieve, and to get meaningful, long-lasting results we need to form a community spanning distributed systems, performance engineering, software engineering, and more. Our joint work could lead to holistic improvements of applications, services, and processes, together with the computing infrastructure supporting them. This vision aligns with the Manifesto on Computer Systems and Networking Research in the Netherlands, which the speaker co-leads. Many of our examples come from real-world prototyping and experimentation, grand experiments in computer systems, and/or benchmarking and performance analysis work conducted with the Cloud group of SPEC RG.

View slides

Karthik Vaidhyanathan (International Institute of Information Technology​ – Hyderabad)

April 11, 2023

Engineering IoT systems through the lens of a smart city living lab

Many cities today are facing various challenges associated with sustainability, traffic congestion, crowd management, etc. Given this context, different government and private organizations have realized the importance of building smarter cities, which can offer a way to address many of these challenges. Many cities worldwide have started utilizing technology extensively to offer different smart solutions, made possible by the Internet of Things (IoT) and related technologies. However, developing and maintaining IoT systems presents different challenges arising from interoperability (communication protocols), heterogeneity (types of devices), resource constraints, performance requirements, etc. Further, it is not feasible to test an IoT solution in a large-scale, city-wide setup prior to deployment. To this end, the smart city living lab, an open-innovation ecosystem, has been set up at IIITH, with support from MEITY, the Smart City Mission, and the Government of Telangana, India, and in collaboration with the technology partners EBTC and the Amsterdam Innovation Arena, to discover and develop cutting-edge innovations with smart city use cases and enrich them with knowledge from research. The lab has an active deployment of more than 200 IoT nodes spanning different IoT verticals such as air quality, water quality and quantity, solar power, etc. These serve as a test bed for various stakeholders to build and test their smart city solutions. In this talk, I will provide an overview of the smart city living lab by elaborating on the stack used to handle some of the above-mentioned challenges, including details on the IoT nodes deployed, the software stack used, the data pipeline, and the visualization generated. Further, the talk will provide a glimpse into some of the existing challenges, and ongoing and future research plans for the smart city living lab.

View slides

David Košťál

April 04, 2023

Insecurity Refactoring: Automated Injection of Vulnerabilities in Source Code (report on paper)

An important part of software development is making code safe against possible attackers. One such attack is SQL injection: if an input field is not sanitized correctly, an attacker may inject code that leaks information or even lets them modify the database. It is therefore important to teach beginning programmers how to recognize vulnerability to this type of attack and how to fix it correctly. For this purpose, usually only small example programs are made, where the vulnerability is obvious and artificial. The presented solution for automatically creating more complex learning examples, insecurity refactoring, takes existing code and changes its internal structure to inject a vulnerability without changing the observable behavior in a normal use case scenario. Insecurity refactoring is achieved by creating an Adversary Controlled Input Dataflow tree, which is then used to find possible injection paths. The injected vulnerabilities stem from real Common Vulnerabilities and Exposures (CVE) reports, which makes the learning examples unique and realistic.
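The kind of vulnerability being injected, and the property that normal use cases behave identically, can be seen in a minimal example (a generic SQL injection sketch, not code from the paper):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name):
    # String concatenation: the classic injectable pattern that
    # insecurity refactoring plants into otherwise benign code.
    query = "SELECT secret FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the fix students should learn to apply.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# Normal use case: both variants behave identically, which is exactly
# the behavior-preservation property the refactoring maintains.
assert lookup_vulnerable("alice") == lookup_safe("alice")

# Malicious input: the injected payload makes the WHERE clause a
# tautology and leaks every row.
payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # leaks all secrets
print(lookup_safe(payload))        # returns nothing
```

In the refactored learning examples the concatenation is buried in realistic dataflow rather than sitting next to the query, which is what makes them harder, and more instructive, than the usual toy programs.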

Tomáš Petříček

March 28, 2023

Cultures of Programming

Computer programming originated at the intersection of logic, art, electrical engineering, business and psychology. Those disciplines brought with them different ways of thinking about and doing programming. In the early days, the friction between cultures could have been just a sign of the immaturity of the field. 70 years later, the idea that a unified culture will emerge as the field matures is becoming difficult to believe. Different cultures keep their strong identity, but they interact. Interesting innovations appear and revealing discussions occur when multiple cultures meet or clash. In this talk, I will illustrate the interactions between different cultures of programming using three case studies from the history of programming: the history of formal reasoning about programs, the rise of interactive programming tools and the development of software engineering.

View slides

Martin Blicha

March 14, 2023

Effective Automated Software Verification: A Multilayered Approach (Ph.D. Thesis Rehearsal)

Automated software verification decides the correctness of a given program with respect to a given formal specification. Formal verification techniques based on model checking provide the necessary guarantees by exploring the program's behaviour exhaustively and automatically. Even though the general problem that automated software verification is trying to solve is undecidable, it can be solved quite efficiently on many instances that arise in practice. However, significant challenges related to scalability persist for complex modern programs. In our work, we argue that this task can be approached by providing solutions at different levels, which we identify as the foundational, verification and cooperative layers of the problem. These correspond to decision and interpolation procedures, sequential model-checking algorithms, and multi-agent solving approaches. We further argue that working on the higher layers can significantly benefit from a deep understanding of the layers beneath them. Overall, we advance the field of automated software verification by contributing solutions on all three layers.

View slides

Andrej Pečimúth

February 28, 2023

Transformation Bisection Tool for the Graal Compiler (Review of the project)

The performance of generated machine code is closely linked to the applied optimizations. Conversely, compiler regressions often manifest as changes in the applied optimizations. We propose capturing optimization and inlining decisions performed by a compiler in each compilation unit. These decisions can be represented in the form of a tree. We introduce an approach utilizing tree matching to detect optimization differences in a semi-automated way. The presented techniques were employed to pinpoint the causes of performance problems in various benchmarks of the Graal compiler.
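The tree-matching idea can be sketched as a recursive diff over decision trees. The sketch below models each compilation unit's decisions as a nested dict (the decision labels are hypothetical, and the real tool's representation and matching are more sophisticated):

```python
def diff_trees(old, new, path=()):
    """Recursively compare two optimization-decision trees, modelled as
    dicts mapping a decision label (e.g. an inlined callee or an applied
    transformation) to the subtree of decisions made inside it. Yields
    (path, kind) for every decision present in only one compilation."""
    for label in old.keys() - new.keys():
        yield path + (label,), "removed"
    for label in new.keys() - old.keys():
        yield path + (label,), "added"
    for label in old.keys() & new.keys():
        yield from diff_trees(old[label], new[label], path + (label,))

# Decision trees recorded for the same compilation unit in two builds.
fast = {"inline f()": {"loop-unroll": {}}, "escape-analysis": {}}
slow = {"inline f()": {}, "escape-analysis": {}}

for path, kind in sorted(diff_trees(fast, slow)):
    print(" > ".join(path), kind)
```

Here the diff localizes the regression to loop unrolling no longer happening inside the inlined `f()`, which is the kind of semi-automated pinpointing the project aims at.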

Michal Töpfer

December 06, 2022

Introducing Estimators—Abstraction for Easy ML Employment in Self-adaptive Architectures

In this talk, we will summarize our recent work in the area of using machine learning for self-adaptation of software systems. We focused on the task of providing application-friendly abstractions and tools that would allow the architects to focus on the application business logic rather than on the intricacies of integrating ML into the adaptation loop. We proposed ML-DEECo – an ensemble-based component model with Estimators, which can provide predictions on future and currently unobservable values in the self-adaptive system. The architect only needs to specify the inputs and outputs of the Estimator and the underlying ML model is trained automatically by our ML-DEECo runtime framework.

Milad Abdullah

November 29, 2022

Reducing Computation Costs in Performance Regression Detection

We will cover recent attempts to automatically detect performance regressions in benchmarking projects such as GraalVM. The general aim is to reduce computation costs for benchmarking projects by reducing the number of benchmark runs that are unlikely to provide useful information regarding the existence of a performance regression.
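One simple way to skip uninformative runs is a screening heuristic: do a short, cheap trial measurement first and schedule the expensive full benchmark run only when the trial shows a slowdown beyond the noise level. The function and thresholds below are an illustrative sketch, not the method from the talk.

```python
import statistics

def needs_full_run(old_trial, new_trial, threshold=0.05):
    """Decide from short trial measurements (lists of times in seconds)
    whether a full benchmark run is worthwhile: skip it when the relative
    slowdown stays below `threshold` minus the observed relative noise."""
    old_mean = statistics.mean(old_trial)
    new_mean = statistics.mean(new_trial)
    noise = statistics.stdev(old_trial) / old_mean      # relative noise
    slowdown = (new_mean - old_mean) / old_mean         # relative change
    return slowdown > threshold - noise

# ~1% difference within ~1% noise: skip the full run.
print(needs_full_run([1.00, 1.01, 0.99], [1.01, 1.02, 1.00]))
# ~20% slowdown: run the full benchmark.
print(needs_full_run([1.00, 1.01, 0.99], [1.20, 1.21, 1.19]))
```

The trade-off is the usual one for such filters: a tighter threshold saves more benchmark runs but risks missing small regressions.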

Adam Šmelko

November 08, 2022

Noarr – C++ library for handling memory layouts and traversals of regular data structures

Jan Pacovský

October 25, 2022

Generalization of Machine-learning Adaptation in Ensemble-based Self-adaptive Systems

Filip Kliber

October 11, 2022

Fuzzing of Multithreaded Programs in .NET - Challenges and Solutions

Reasoning about programs that use multiple computational threads is very difficult, because the behaviour of the program might depend on the specific interleaving of operations performed by individual threads. Moreover, some thread interleavings might exhibit a subtle bug in the application. This talk will demonstrate which challenges need to be tackled when trying to control the thread interleaving of a .NET application in order to force some atomicity violations to manifest.
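The kind of bug such a fuzzer hunts for, and the idea of forcing a specific interleaving via injected scheduling points, can be shown in a few lines. The sketch below is in Python rather than .NET and hard-codes one bad interleaving with events, whereas a fuzzer explores such interleavings systematically.

```python
import threading

counter = 0
read_done = threading.Event()
update_done = threading.Event()

def thread_a():
    # Non-atomic read-modify-write; the pause between read and write is
    # an explicit scheduling point that hands control to the other thread,
    # forcing the problematic interleaving deterministically.
    global counter
    value = counter          # read
    read_done.set()          # scheduling point: let thread B run
    update_done.wait()       # resume only after B has updated
    counter = value + 1      # write based on a now-stale read

def thread_b():
    global counter
    read_done.wait()
    counter += 1             # this update will be lost
    update_done.set()

a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start(); b.start(); a.join(); b.join()
print(counter)  # → 1, not 2: the atomicity violation manifests
```

Under most natural schedules the two increments do not overlap and the program prints 2, which is exactly why such bugs survive ordinary testing and need controlled interleaving to surface.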

Michele Tucci

September 27, 2022

Code Coverage for Performance Testing

Test coverage measures what percentage of source code has been executed by a test suite, and it has been used for decades as a metric to assess the quality of tests. Despite its popularity in functional testing, the same quality criteria have seldom been applied to performance testing. Nonetheless, given the considerable costs associated with the design and execution of performance tests, metrics of code coverage could become crucial in the optimization of performance testing activities. In this talk, we will focus on how code coverage can be defined for performance testing, in a way that is compatible with current goals and practices in the field. We will introduce a method to compute code coverage in practice, and we will explore the challenges that arise when compared to functional testing.
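A minimal version of coverage measurement extended toward performance concerns is to record not just whether a line executed, but how often, since a line run a million times matters far more to performance than one run once. The tracer below is an illustrative stdlib sketch, not the method from the talk.

```python
import sys

def count_lines(func, *args):
    """Run `func(*args)` and return {line number: execution count} for
    the lines of `func`, using Python's tracing hook. Execution counts,
    unlike plain hit/miss coverage, reveal which lines dominate the
    workload of a test."""
    counts = {}
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            counts[frame.f_lineno] = counts.get(frame.f_lineno, 0) + 1
        return tracer  # keep tracing inside this frame

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return counts

def workload(n):
    total = 0
    for i in range(n):
        total += i       # hot line: executed n times
    return total

counts = count_lines(workload, 1000)
print(counts)  # the loop lines carry counts near 1000
```

Weighting coverage by such counts (or by measured time per line) is one way to make the metric reflect performance-testing goals rather than purely functional ones.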

Pavel Parízek

January 28, 2020

Rehearsal of habilitation talk