The 1st International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS) brings together researchers and practitioners to discuss the state of the art in modelling and evaluating the extra-functional properties of service-oriented software systems, including, for example, performance, reliability, security, and energy consumption. While concepts for service design, composition, provisioning, and management are currently maturing, a systematic engineering approach to service-oriented software with predictable quality of service (QoS) is still missing. QUASOSS aims to close this gap by stimulating research and technology transfer at the intersection of service-oriented computing and quality-of-service engineering.
The workshop is held in conjunction with the 7th joint meeting of ESEC/FSE (ESEC/FSE 2009) in Amsterdam, The Netherlands.
The workshop provides a forum for researchers and practitioners to engage in fruitful discussion and to explore ways of transferring research results into industrial practice. The following topics are of particular relevance to QUASOSS:
This is a preliminary version of the workshop program.
Keynote by Antonia Bertolino: Approaches to Testing Service-Oriented Software Systems
|10:30||Session I: Design Time: New Models from Measurements (Chair: Heiko Koziolek)|
|10:30||Peter Libic, Petr Tuma and Lubomir Bulej: Issues in Performance Modeling of Applications with Garbage Collection|
|10:50||Philipp Reinecke, Sebastian Wittkowski and Katinka Wolter: Response Time Measurements using the Sun Java Adventure Builder|
|11:10||Jens Happe, Hui Li and Wolfgang Theilmann: Black-Box Performance Models: Prediction based on Observation|
|11:30||Mini-Panel (Philipp, Jens, Peter)|
|14:00||Session II: Design Time: Using Models (Chair: Paul Pettersson)|
|14:00||Anne Martens and Franz Brosch: Optimising multiple quality criteria of service-oriented software architectures|
|14:20||Christoph Rathfelder and Samuel Kounev: Modeling Event-Driven Service-Oriented Systems using the Palladio Component Model|
|14:40||Mini-Panel (Anne, Christoph)|
|15:30||Session III: Runtime: Monitoring & Management (Chair: Petr Hnetynka)|
|15:30||Danilo Ardagna, Raffaela Mirandola, Marco Trubian and Li Zhang: Run-time Resource Management in SOA Virtualized Environments|
|15:50||Andreas Textor, Markus Schmid, Jan Schaefer and Reinhold Kroeger: SOA Monitoring Based on a Formal Workflow Model with Constraints|
|16:10||Mini-Panel (Danilo, Andreas)|
|16:30||Panel Discussion: The Future of Quality and Performance Engineering for Service-Oriented Systems|
Abstract: The attractiveness and popularity of Service-Oriented Software Systems (SOSSs) stem from the growing availability of independent services that can be cost-effectively composed with other services to dynamically provide richer functionality. Service orientation, however, poses new and difficult challenges to testers, especially when it comes to testing the interactions between heterogeneous, loosely coupled, and independently developed services. Service integration testing requires discipline, standardized processes, and agreed policies to be put in place, which we refer to as SOA (Service-Oriented Architecture) Test Governance (STG). Discovered services usually provide just a syntactical interface; this enables some general black-box tests but is insufficient for developing an adequate understanding of the quality of integration between the interacting services. Moreover, testing the functional and extra-functional properties of a composite SOSS cannot generally rely on the ready or full availability, for testing purposes, of all invoked services (e.g., their usage might bring unwanted side effects). In this talk we will survey some of our recent results on SOSS testing that address the above needs. We will first discuss how the STG concept is implicit behind any approach to testing composite SOSSs and then give an overview of three different, complementary SOSS test approaches realizing different grades of STG, namely: the state-of-practice prototype tool WS-TAXI, for fully automatic generation of black-box test inputs; the novel SOCT approach, allowing for test coverage measurement of independent services without losing their implementation neutrality; and the PUPPET tool, for model-based generation of a testbed simulating the functional and extra-functional behavior of invoked external services.
The workshop solicits research statements introducing new open issues as well as papers describing ongoing research. The expected length of submissions is 4–8 pages in ACM format. Each paper will be reviewed by 2–3 reviewers. Accepted papers will be published by ACM together with the proceedings of ESEC/FSE 2009.