Software performance tends to be easier to observe after the fact than to design up front. Focusing on the implementation phases of software development, we work on methods that help improve developer awareness of software performance.
Performance Unit Testing strives to construct specialized unit tests that validate performance. The obstacles we address include portable test construction and robust evaluation of noisy test results.
- We have introduced a formalism for describing performance test conditions in an ICPE 2012 paper [1].
- We have evaluated the use of performance unit testing in a software project in an EPEW 2013 paper [2].
- We are developing tool support for performance unit testing in Java.
Performance Documentation incorporates information on code performance into standard code documentation. Performance data is collected from the corresponding performance unit tests and displayed interactively in the generated documentation.
- We have introduced the concept of performance documentation with a case study in an ICPE 2015 paper [3].
- We are developing tool support for performance documentation in Java.
Performance Unit Testing
Compared to functional unit testing, performance unit testing is more difficult, partially because correctness criteria are more difficult to express for performance than for functionality. Using the Stochastic Performance Logic (SPL) [1], we aim to express assertions on code performance in relative, hardware-independent terms. Using the performance unit testing tools, these assertions can be automatically validated [2], [5]. Besides performance unit testing, we experiment with using the performance unit tests to generate data for software documentation extended with performance information [3]. Other research includes incorporating performance awareness into adaptive applications [4].
Example
Stochastic Performance Logic (SPL) allows the developer to capture assumptions about performance of a code fragment. For example, to express that a proprietary sorting method is faster than the one provided by the Java Class Library, we can execute a test with the following annotation (somewhat simplified syntax):
@SPL("for n in 100, 200, 500, 1000 SELF(n) < java.util.Arrays#sort(n)")
public static void proprietarySort(int[] data) {
...
}
As another example, it is also possible to express that a newer version of the method is faster than an older one, helping to prevent performance regressions:
@SPL("for n in 100, 200, 500, 1000 SELF@rev951(n) < SELF@rev840(n)")
public static void proprietarySort(int[] data) {
...
}
Complete examples are included with the tools.
Tools
The core tool is a command line utility to execute the performance unit tests. The utility scans a given project, locates the tests specified using annotations, and executes them. The results are provided in the form of an HTML report with a quick overview and a detailed information page for each test, including visualization of the measurements. The utility can cooperate with both Git and Subversion to fetch particular project versions used as baselines. It can also use SSH to execute the measurements on a remote host, which can help improve measurement stability.
In addition to the core command line utility, we also provide plugins for the Eclipse integrated development environment and the Hudson continuous integration server. The Eclipse plugin provides project configuration, an annotation editor with content assist, and an interactive results browser that can also display results from Hudson.
Tool | Link | Description |
---|---|---|
Documentation | User Guide, Developer Manual | User guide and developer manual for the performance unit testing tools. |
Core tools | Packaged Releases, Source Repository | Core command line utility for executing performance unit tests. |
Eclipse plugin | Installation Guide, Update Site, Source Repository | Eclipse plugin for syntax highlighting and expression completion, unit test execution, report browsing. |
Hudson plugin | Packaged Releases, Source Repository | Hudson plugin for seamless performance unit test support in continuous integration. |
We also make available research prototypes under development, listed below.
Tool | Link | Description |
---|---|---|
Experimental SPL Implementation | Source Repository | Prototype implementation of the core tools for performance testing, with support for runtime data collection based on DiSL. |
Runtime Adaptation Prototype | Source Repository | An agent for runtime collection of performance data that can be used to control self-aware applications that adapt to their own performance [4]. |
Formula Evaluation Engine | Source Repository | A standalone module for evaluating SPL formulas that supports multiple interpretations. The module contains no probes; instead, it operates on externally provided data. |
SPL Integration with JMH | Source Repository | A Maven plugin to ease integration of the JMH microbenchmarking harness with SPL formulas (a plain JMH benchmark of the kind it targets is sketched below the table). |
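To give an idea of what the measurement side of that integration looks like, the following is a plain JMH benchmark sketch, measuring the library sort on the workload sizes used in the SPL examples above. The class, field, and method names are invented for the illustration, and the way the Maven plugin attaches SPL formulas to such benchmarks is not shown here.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class SortBenchmark {

    // Workload sizes, mirroring the "for n in 100, 200, 500, 1000" quantifier of the SPL examples.
    @Param({"100", "200", "500", "1000"})
    int n;

    int[] data;

    @Setup
    public void prepare() {
        // Fixed seed so that every run measures the same input.
        data = new Random(42).ints(n).toArray();
    }

    @Benchmark
    public int[] librarySort() {
        // Sort a copy so that every invocation works on unsorted data.
        int[] copy = Arrays.copyOf(data, data.length);
        Arrays.sort(copy);
        return copy; // returning the result keeps the work from being optimized away
    }
}
```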
Experiments
We have performed a retroactive study in which we picked an existing software project, augmented it with performance unit tests, and evaluated the results. Our project of choice was JDOM, a software package “for accessing, manipulating, and outputting XML data from Java code”. With about 15000 lines of code, it is of reasonable size for an experiment that requires frequent compilation and manual code inspection. With over 1500 commits spread across 13 years of development, it provided ample opportunity to observe performance regressions. JDOM is an open-source project with a public source code repository. Details of the case study can be found in [2].
The table below contains the results from the measurements on three different machines.
Machine | OS & Java VM version | Test reports ¹ | Raw data ² |
---|---|---|---|
Intel Xeon (2.33 GHz, 8 GB) | Fedora 18 64-bit, OpenJDK 1.7 | Browse on-line, ZIP archive | ZIP archive |
Intel Pentium 4 (2.2 GHz, 512 MB) | Fedora 18 32-bit, OpenJDK 1.7 | Browse on-line, ZIP archive | ZIP archive |
Intel Atom (1.6 GHz, 1 GB) | Windows XP SP2 32-bit, Oracle HotSpot 1.7 | Browse on-line, ZIP archive | ZIP archive |
¹ Open `index.html` in the ZIP archive to see the overview page of the HTML report.
² The data files are plain text, one execution time per line. The first line of each file contains the measurement identification, including machine, method, repository, revision, workload generator and date. To process the data files in R, use `read.table("filename.dat")[,1]`, which gives access to the measured values as a vector of execution times in nanoseconds.
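If processing the raw data outside R is more convenient, the following minimal Java sketch reads one data file under the format assumptions stated above: the identification on the first line, followed by one execution time in nanoseconds per line. The file and class names are placeholders.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ReadRawData {
    public static void main(String[] args) throws IOException {
        // Placeholder file name; substitute a file extracted from one of the raw data archives.
        List<String> lines = Files.readAllLines(Paths.get("filename.dat"));

        // First line: measurement identification (machine, method, repository, revision, ...).
        String identification = lines.get(0);

        // Remaining lines: one measured execution time in nanoseconds per line.
        double[] times = lines.stream()
                .skip(1)
                .mapToDouble(Double::parseDouble)
                .toArray();

        System.out.println(identification + ": " + times.length + " samples");
    }
}
```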
Performance Documentation
To increase performance awareness in software development, we use the performance unit tests to enrich reference documentation with performance-related information. To this end, our tools extend the standard Javadoc generator to include an interactive performance browser with each method that has a performance unit test. The tools execute measurements on the fly, gradually increasing the accuracy of the presented results [3], [6].
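As an illustration, reusing the simplified @SPL syntax from the example above, a method carrying both ordinary Javadoc and a performance unit test could look as follows; it is such methods that receive the interactive performance view in the generated documentation. The Javadoc text itself is invented for the illustration.

```java
/**
 * Sorts the given array in place using the proprietary algorithm.
 *
 * @param data the array to sort
 */
@SPL("for n in 100, 200, 500, 1000 SELF(n) < java.util.Arrays#sort(n)")
public static void proprietarySort(int[] data) {
    // implementation elided, as in the examples above
}
```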
Tools
Tool | Link | Description |
---|---|---|
Performance Documentation | Source Repository | An extension of the standard Javadoc generator with performance-related information. |
Acknowledgments
This work has been supported by the EU project 257414 ASCENS, the GACR project P202/10/J042 FERDINAND, and by the Charles University institutional funding SVV-2012-265312 and SVV-2013-267312.
Contact
References
- [1] L. Bulej, T. Bureš, J. Keznikl, A. Koubková, A. Podzimek, P. Tůma: Capturing Performance Assumptions Using Stochastic Performance Logic, in Proc. 3rd ACM/SPEC International Conference on Performance Engineering (ICPE), pp. 311–322, 2012, ISBN: 978-1-4503-1202-8, DOI: 10.1145/2188286.2188345
- [2] V. Horký, F. Haas, J. Kotrč, M. Lacina, P. Tůma: Performance Regression Unit Testing: A Case Study, in Computer Performance Engineering: Proc. 10th European Performance Engineering Workshop (EPEW), pp. 149–163, 2013, ISBN: 978-3-642-40724-6, 978-3-642-40725-3
- [3] V. Horký, P. Libič, L. Marek, A. Steinhauser, P. Tůma: Utilizing Performance Unit Tests To Increase Performance Awareness, in Proc. 6th ACM/SPEC International Conference on Performance Engineering (ICPE), pp. 289–300, 2015, ISBN: 978-1-4503-3248-4, DOI: 10.1145/2668930.2688051
- [4] L. Bulej, T. Bureš, V. Horký, J. Keznikl, P. Tůma: Performance Awareness in Component Systems (Vision Paper), in Proc. 36th IEEE Annual Computer Software and Applications Conference (COMPSAC) CORCS Workshop, pp. 514–519, 2012, DOI: 10.1109/COMPSACW.2012.96
- [5] T. Trojánek: Capturing Performance Assumptions Using Stochastic Performance Logic, Master's Thesis, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic, Charles University Thesis Repository Entry, 2012
- [6] J. Náplava: PerfJavaDoc: Extending API Documentation With Performance Information, Bachelor's Thesis, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic, Charles University Thesis Repository Entry, 2015