Labs, task #09 (NPB): it escaped my notice that (a) Phoronix ignores $NUM_CPU_CORES and (b) MPI ignores numactl. The task therefore cannot be completed easily as given. To ensure that NPB actually runs with the correct number of threads, add the following snippet to the NPB launcher in
if [ -n "$PEVA_CPU_CORES" ]; then NUM_THREADS="$PEVA_CPU_CORES"; fi
and run the suite with
Credentials for GitLab were sent. Contact us if you have not received them.
The first labs (Feb 28) are cancelled.
Why Take the Course
The goal is to provide an overview of methods used for performance evaluation of software. We will focus on experimental methods.
The course can teach you:
- to recognize suspicious measurement results; you can find those in commercial presentations, scientific publications, and Master theses!
- to design and implement measurement and evaluation infrastructure.
- to design and implement performance experiments, and to compare different products from the performance perspective.
Another benefit of the course shows up in research, where the properties of proposed solutions must be verified, and performance experiments are essential for such verification. This course will teach you how to do that, and you can apply the knowledge in your Master thesis, for example.
Your MFF studies have given you a good grounding in mathematics and statistics, and you are also well versed in computer system internals (architecture, operating systems, etc.). Both fields are necessary for computer systems performance evaluation, and this course will help you take advantage of that background.
The course covers one part of the Master State Exam requirements: topic 2 (Embedded and Real-time Systems) of the plan Dependable Systems for Software Systems, in particular "metrics of computer system performance and their statistical evaluation (metriky výkonnosti počítačových systémů a jejich statistické vyhodnocování)". It also extends the material on the requirements "simulation and modelling of performance (simulace a modelování výkonnosti)" and "QPN (Queuing Petri Nets)", but these are covered in more detail by other courses.
There are no formal prerequisites. However, you should be familiar with the basics of probability and statistics; knowledge of operating systems, computer architecture, and compiler principles is also an advantage. If you have passed Operating Systems (NSWI004) and Methods of Mathematical Statistics (NMAI061), you are fine. If you have missed those courses, you should at least be interested in the systems aspects of computers and programs, and be willing to study the statistical methods we use along with the R system.
The course is aimed at Master program students, but Bachelor program students are also welcome. If you like the topic, you will probably be fine.
In the labs you will exercise the knowledge from lectures in a practical manner.
To get the credit (zápočet) you need to finish tasks given during the labs.
| Date | Topic | Materials |
|------|-------|-----------|
| March 14th | Agenda, timers | Materials |
| March 28th | Hardware counters | Materials |
| April 11th | Big benchmarks | Materials |
| April 25th | Instrumentation (Pin, DiSL) | Pin doc, Pin OS-API, DiSL doc, DiSL API, Materials |
| May 23rd | Complete performance experiment | TBB (concurrent_unordered_map), Collections (HashMap), Materials |

On lab blade, use Java from
The exam will have two parts: practical and theoretical. In the practical part, you will be required to design and execute a performance evaluation experiment similar to (but more complex than) the one from the last lab assignment. You will be able to choose from several languages (at least C and Java will be available) and a framework or library to collect the data. During the exam you will explain your experiment and results.
The theoretical part will be testing your understanding and ability to use various methods and systems presented in the lectures.
This is a preliminary syllabus of the course; minor changes are possible. Each point in the following list will usually be covered in a single lecture:
- General information. Goals and means we can use for performance evaluation.
- What to measure. Metrics.
- How to measure I - Theory. Clock, profiling, tracing, events.
- How to measure II - Practical 1. Timers, Counters.
- How to measure III - Practical 2. Instrumentation. Frameworks overview.
- How to process data I - Statistical tools. Means, distributions, ...
- How to process data II - Alternatives, confidence intervals, tests, ANOVA
- Graphical presentations - How to do good plots.
- What is going on? Data analysis, reading plots.
- Simulation. Modeling.
The slides presented in the lectures will be available here. You can also use many online resources; we will provide links in the lectures.
Most topics are covered in detail in these great books:
- Jain, R.: The Art of Computer Systems Performance Evaluation. Wiley, New York 1991.
- Lilja, D. J.: Measuring Computer Performance: A Practitioner's Guide. Cambridge University Press, 2000.