General Information (2016/2017)

Time and place: Wednesday, 12:20 in S4
Lecturers: Petr Tůma <tuma<at-sign>d3s.mff.cuni.cz>
Vojtěch Horký <horky<at-sign>d3s.mff.cuni.cz>
Labs: Wednesday, 14:00 in SU1, even weeks (see schedule below)
Official information (SIS): NSWI131

Why Take the Course

The goal is to provide an overview of methods used for performance evaluation of software. We will focus on experimental methods.

The course can teach you:

  • to recognize suspicious measurement results; you can find those in commercial presentations, scientific publications, or Master theses!
  • to design and implement the measurement and evaluation infrastructure.
  • to design and implement performance experiments and to know how to compare different products from the performance perspective.

Another benefit of the course lies in research, where the properties of proposed solutions have to be verified, and performance experiments are essential for such verification. This course will teach you how to carry them out, and you can use that knowledge in your Master thesis, for example.

Your MFF studies provided you with good basics in mathematics and statistics. You are also well-versed in computer system internals (architecture, operating system, etc.). For computer systems performance evaluation, both fields are necessary. This course will help you take advantage of your previous knowledge.

The course covers one part of the Master State Exam requirements - topic 2 (Embedded and Realtime Systems) of the plan Dependable Systems for Software Systems, in particular "metrics of computer system performance and their statistical evaluation (metriky výkonnosti počítačových systémů a jejich statistické vyhodnocování)". It also adds to the requirements "simulation and modeling of performance (simulace a modelování výkonnosti)" and "QPN (Queueing Petri Nets)", but those are covered in more detail by other courses.

Prerequisites

There are no formal prerequisites. However, you should be familiar with the basics of probability and statistics; knowledge of operating systems, computer architecture, and compiler principles is also advantageous. If you have passed Operating Systems (NSWI004) and Methods of Mathematical Statistics (NMAI061), you are fine. If you have missed those courses, you should be at least somewhat interested in the system aspects of computers and programs and willing to study the statistical methods we use, along with the R system.

The course is aimed at Master program students, but Bachelor program students are also welcome. If you like the topic, you will probably be fine.

Labs

In the labs you will exercise the knowledge from the lectures in a practical manner. The first lab is introductory and shows basic measurement examples and the data processing tool (the R system). Four labs, each with one assignment, will follow. Your task will be to finish the assignment during the lab; if you finish all four, you will receive the credit. There will be an option to redo one assignment (probably in the first week of the examination period).
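
To give a flavour of the measurements the labs start from, here is a minimal sketch of a timer-based measurement in Java (one of the languages used in the course). The class name and the summation workload are made up for illustration only; the printed timing is the kind of raw data you would then process in R.

    // A made-up example: time a simple array summation with the
    // monotonic nanosecond timer.
    public class TimerExample {
        public static void main(String[] args) {
            int[] data = new int[1_000_000];

            long start = System.nanoTime();
            long sum = 0;
            for (int x : data) {
                sum += x;
            }
            long elapsed = System.nanoTime() - start;

            // Print the sum as well, so the compiler cannot remove the loop.
            System.out.println("sum = " + sum + ", elapsed = " + elapsed + " ns");
        }
    }

A single measurement like this is rarely meaningful on its own; the lectures and labs discuss why repeated measurements and statistical processing are needed.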

Task  Date      Topic                                    Materials
-     March 15  Measurement examples, computing with R   Slides, example (data), example (sources)
1     March 29  Timers and counters                      Slides, task (sources), task (description)
2     April 12  Instrumentation (Pin, DiSL)              Slides, task (sources), task (description), Pin doc, Pin OS-API, DiSL doc, DiSL API
3     May 10    Plotting and statistical tests           Slides, task (sources), task (description)
4     May 24    Complete performance experiment          Slides, task (sources), task (description)
-     June 6    Corrections for a failed task            We will meet up at 10:00 in front of office 205, BYOD. Task description

Grading

The exam will have two parts: practical and theoretical. In the practical one, you will be required to design and execute a performance evaluation experiment similar to (but more complex than) the one from the last lab assignment. You will be able to choose from several languages (at least C and Java will be available) and a framework or library to collect the data. During the exam you will explain your experiment and results.
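
To illustrate what such an experiment might involve, below is a hedged sketch of a minimal experiment skeleton in Java: warm up, take repeated samples of a placeholder workload (sorting a random array), and write the raw samples to a CSV file for later statistical processing. The workload, the constants, and the file name are assumptions for illustration only; the actual exam assignment will have its own requirements.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.Arrays;
    import java.util.Random;

    // A sketch only: warmup, repeated measurements, raw samples to CSV.
    public class ExperimentSkeleton {
        private static final int WARMUP = 1_000;    // iterations before measuring
        private static final int SAMPLES = 10_000;  // measured iterations

        // One measured operation: sort a fresh random array (placeholder workload).
        private static long measureOnce(Random random) {
            int[] data = random.ints(10_000).toArray();
            long start = System.nanoTime();
            Arrays.sort(data);
            return System.nanoTime() - start;
        }

        public static void main(String[] args) throws IOException {
            Random random = new Random(42);

            // Warmup lets the JIT compiler and caches stabilize before measuring.
            for (int i = 0; i < WARMUP; i++) {
                measureOnce(random);
            }

            // Keep every sample rather than just an average, so the
            // distribution can be examined later (for example in R).
            try (PrintWriter out = new PrintWriter("samples.csv")) {
                out.println("iteration,time_ns");
                for (int i = 0; i < SAMPLES; i++) {
                    out.println(i + "," + measureOnce(random));
                }
            }
        }
    }

In practice you would also vary the experiment parameters, repeat whole runs, and apply the statistical methods from the lectures to the collected samples.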

The theoretical part will test your understanding of the various methods and systems presented in the lectures and your ability to use them.

Preliminary Syllabus

This is a preliminary syllabus of the course. Minor changes are possible. Each point in the following list will usually be covered in a single lecture:

  1. General information. Goals and means we can use for performance evaluation.
  2. What to measure. Metrics.
  3. How to measure I - Theory. Clock, profiling, tracing, events.
  4. How to measure II - Practical 1. Timers, counters.
  5. How to measure III - Practical 2. Instrumentation. Frameworks overview.
  6. How to process data I - Statistical tools. Means, distributions, ...
  7. How to process data II - Alternatives, confidence intervals, tests, ANOVA.
  8. Graphical presentations - How to do good plots.
  9. What is going on? Data analysis, reading plots.
  10. Simulation. Modeling.

Literature

The slides presented in the lectures will be available here. You can also use many online resources; we will provide links in the lectures.

Most topics are covered in detail in these excellent books:

  • Jain, R.: The Art of Computer Systems Performance Analysis. Wiley, New York, 1991.
  • Lilja, D. J.: Measuring Computer Performance: A Practitioner's Guide. Cambridge University Press, 2000.

Other material:

Slides

Complete slides and some examples from 2014 are available here. During 2016, the slides are being updated. The most current version of the slides (updated every week):
