This page describes what the on-site tests will look like and will eventually also contain descriptions of the homework assignments. Please have a look at the course guide page for details on how the grading works.

Retaking on-site tests

If you were graded Failed on one of the on-site tests, you can retake one test during the examination period. There are already some schedules available in SIS (in the Exam dates module), so please enroll normally. You can enroll in any schedule (i.e., it does not matter who the examiner is; it does not have to be your teacher).

If you have exactly one Failed test, we know that this is the one you want to retake. If you failed multiple tests, we will ask you on the spot which one you want to retake.

If your grade is Excused absence, you are also allowed to retake that test (on top of one Failed retake). Again, you will be asked on the spot which test you are retaking. If you need to retake multiple tests, you will have to come to multiple exam dates (so that you have the full 45 minutes for each assignment).

Retaking passed tests is not possible. Please do not enroll if you passed all the tests: there is nothing for you to retake and you would only block a spot for other students! We will keep an eye on the capacity and will schedule more exam terms if needed.

Retaking homework assignments is not possible.

On-site tests

This is the schedule for the on-site tests. Each test will be held at the beginning of the lab (the maximum duration of a test is 45 minutes).

Week (date)          | Topic                                                              | Extension details
05 (Mar 18 - Mar 22) | T1: Basic shell scripting (pipes)                                  | Second task of similar size
09 (Apr 15 - Apr 19) | T2: Using Git CLI (incl. branching/merging and push/pull over SSH) | More complex branching
14 (May 20 - May 24) | T3: make build tool (tweaking existing setup)                      | More complex setup of make-based project

You are expected to come to the lab you have enrolled in.

If you need to come to some other lab, please contact us via a confidential issue (at least one week ahead of time, and provide reasonable documentation) so that we can find an alternative slot for you (note that attending a lab of another course or attending a sporting event is not a valid reason). If you miss the test due to some unexpected circumstances, please contact us as soon as possible (again, through a confidential issue). Expect that the extra term will probably be held during the examination period.

If you are enrolled in the special Friday lab, please check here at the beginning of week 5 to see whether a split will be needed (i.e., whether half of the students will need to come at 10:40 and the other half at 11:25).

UPDATE: Currently there are more than 30 students enrolled in the Friday lab 23bNSWI177x13. If your SIS login starts with [a-k], please come at 10:40; if your login starts with [l-z], please come at 11:30.

UPDATE: Because of low attendance in the Friday lab 23bNSWI177x13, everyone is expected to come at 10:40. Thank you!

Tests will be written on school machines. Make sure you can log in there and that your environment is set up comfortably.

Your solution will be submitted through GitLab or through another Git repository: make sure you can perform a clone via a command-line client (for the first test during week 5 you will be able to use the GitLab web UI, but using the CLI tools is actually simpler and you will know all the necessary commands from Lab 04).

You are allowed to use our web pages and off-line manual pages, and you can consult your notes and your solutions to the examples that are part of the lab materials.

You are not allowed to use any other devices (cell phones, your own laptops etc.), consult other on-line resources (the machines will have restricted access to the Internet anyway) or communicate your solution to other students.

In other words, the on-site tests require that you can solve the tasks on your own with technical documentation only.

Any attempt to bypass the above rules (e.g. trying to search StackOverflow on your cell phone) means failing the course on the spot.

You are free to ask for clarification from your teacher if you do not understand the assignment, obviously. We can provide limited hints if it is clear that you are heading in the right direction and need only a little push.

Sample task (shell pipes)

Write a script (shell pipe) that reads web server logs and prints the day with the highest number of requests.

The logs will use the same format as in the example from the labs:

2023-01-05,02:35:05,192.168.18.232,622,https://example.com/charlie.html
2023-01-05,09:01:33,10.87.6.151,100,https://example.com/bravo.html
2023-01-06,17:25:17,52.168.104.245,1033,https://example.com/delta/delta.html

For the above data, we expect the script to print the following output.

2023-01-05

The web log will come on standard input; print the date on standard output. You can safely assume that the input is not corrupted. Do not make any assumptions about the order of the input lines.
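
One possible solution sketch, using only standard tools from the labs (the exact pipeline is of course up to you):

cut -d , -f 1 | sort | uniq -c | sort -r -n | head -n 1 | sed -E 's/^ *[0-9]+ //'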

Notes for the Git CLI exam

Please see the info about the Friday lab 23bNSWI177x13 above.

You will be expected to perform the following tasks in Git from the command line (some might need to be executed on the remote machine linux.ms.mff.cuni.cz); a short command refresher follows the list.

  • Configure your Git environment (author and e-mail)
  • Clone a repository (from gitolite3@linux.ms.mff.cuni.cz or from GitLab or generic HTTPS)
  • Create a commit
  • Create a branch
  • Switch between branches
  • Merge a branch (and solve any conflicts)
  • Push changes (branches) to server
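
The corresponding commands might look like this (repository and branch names are placeholders, not the actual exam assignment):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git clone gitolite3@linux.ms.mff.cuni.cz:lab05-LOGIN
git commit -m "Describe the change"    # after staging files with git add
git branch feature
git switch feature
git merge feature                      # resolve any conflicts, then git add + git commit
git push origin feature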

Ensure that you can clone from gitolite3@linux.ms.mff.cuni.cz when using the school machines. Only authentication via public key will be available (i.e. upload your keys to 05/key.[0-9].pub files in your repository before the exam as explained in Lab 05).

Update: we have added a new CI job to your GitLab repositories that will warn you about typical issues with your keys. Feel free to execute it locally via ./bin/run_tests.sh misc/keys.

The following command (replace LOGIN with your SIS/GitLab login in lowercase) will check that.

ssh -o ForwardAgent=no LOGIN@u-pl1.ms.mff.cuni.cz ssh -T gitolite3@linux.ms.mff.cuni.cz

You should see something like the following in the output:

hello LOGIN, this is gitolite3@linux running gitolite3 3.6.13-2.fc39 on git 2.44.0

 R W    lab05-LOGIN
 R      lab07-group-sum-ng

If you see the following, your keys are not set up correctly.

(LOGIN@u-pl1.ms.mff.cuni.cz) Password:
Permission denied, please try again.
Permission denied, please try again.
gitolite3@linux.ms.mff.cuni.cz: Permission denied (publickey,password).

Update to the above.

The command will first ask you for your SIS/GitLab password because you are first authenticating to the school lab machine. From there, you SSH to our server, which will use the key.

If you have set up passphrase protection for your keys, you will need to remove the -T from the command above (and perhaps even add -tt to the first SSH, i.e., ssh -tt -o ForwardAgent=no ...) [see issue #102].

Feel free to store the URL gitolite3@linux.ms.mff.cuni.cz somewhere on the local disk in your $HOME so that you do not have to type it manually during the exam.

For example, adding export giturl=gitolite3@linux.ms.mff.cuni.cz to your .bashrc (similar to setting $EDITOR) would allow you to call just git clone $giturl:lab05 which might also save you some time.

Update: another option is to set up an alias in your ~/.ssh/config like this, which would allow you to clone via git clone exam:lab05-LOGIN.

Host exam
    Hostname linux.ms.mff.cuni.cz
    User gitolite3

The focus of the exam is on working with Git. You will not be required to write any script on your own, but we will be working with a repository containing the following script for printing simple bar charts in the console. You will be required to make some small modifications (such as fixing typos), but we will always guide you to the right place in the code.

import argparse
import sys

def parse_config():
    # Parse command-line options (the plot width in columns).
    args = argparse.ArgumentParser(description='Console bar plot')
    args.add_argument('--columns', default=60, type=int, metavar='N')
    return args.parse_args()

def load_input(inp):
    # Read one number per line, skipping blank lines and '#' comments.
    values = []
    for line_raw in inp:
        line = line_raw.strip()
        if line.startswith('#') or not line:
            continue
        try:
            val = float(line)
        except ValueError:
            print(f"WARNING: ignoring invalid line '{line}'.", file=sys.stderr)
            continue
        values.append(val)
    return values

def print_barplot(values, scale, symbol):
    # Print one bar per value, scaled down by the given coefficient.
    for val in values:
        print(symbol * round(val / scale))

def main():
    config = parse_config()
    values = load_input(sys.stdin)
    if not values:
        sys.exit(0)
    # Scale so that the largest value spans the whole configured width.
    coef = max(values) / config.columns
    print_barplot(values, coef, '#')

if __name__ == '__main__':
    main()
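
For instance, assuming the script is saved as barplot.py (the file name is just an illustration), it could be invoked like this:

printf '3\n10\n6\n' | python3 barplot.py --columns 10

For this input the output would be:

###
##########
######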

Notes for the make exam

Please see the info about the Friday lab 23bNSWI177x13 above.

You will be expected to perform the following operations with a Makefile (some might need to be executed on the remote machine linux.ms.mff.cuni.cz); a small illustration follows the list.

  • Build a project (or its component) using make
  • Understand existing Makefile
  • Make updates to the Makefile
    • Fix list of dependencies
    • Fix command invocation for a particular target
    • Merge multiple rules into a pattern rule
    • Add a new target to Makefile with correctly derived dependencies
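
For illustration, merging per-file rules into a single pattern rule might look like this (a generic sketch with a made-up command, not the actual exam project):

# Instead of one rule per file...
#
# one.html: one.md
#         pandoc -o one.html one.md
# two.html: two.md
#         pandoc -o two.html two.md
#
# ...a single pattern rule covers them all ($@ is the target, $< is the first
# prerequisite; note that the recipe line must start with a tab).
%.html: %.md
	pandoc -o $@ $<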

Ensure that you can clone from gitolite3@linux.ms.mff.cuni.cz when using the school machines (as for the Git CLI exam). See hints and details above.

Homework assignments

This is the preliminary schedule for the homework assignments and their topics (we expect that the deadline for the second assignment will fall inside the examination period).

Weeks   | Topic                                    | Extension details
07 - 10 | T4: More complex shell script            | Extra feature of the main task
12 - 15 | T5: Project setup (CI, build tools, Git) | Extra feature of the main task

As with the on-site tests, your solutions will be submitted through a Git repository for evaluation.

For these assignments you are allowed to use virtually any resource available, including manual pages, our website, on-line tutorials, or services such as ChatGPT.

You must properly cite your sources if you copy (or copy and adapt) parts of your solution (this includes answers from AI tools). You do not have to document the use of manual pages or of the course website.

No matter which sources were used you must be able to understand and explain the design/implementation of your solution. Inability to explain your solution is equivalent to no submission at all.

The homework assignments are individual tasks that must be solved by each student separately. Discussing your solution with your colleagues is fine; submitting their work as yours is prohibited.

Task T4: shell script

Write a web page generator for a task-based tournament.

There can be an arbitrary number of teams in the tournament, and each team can submit its implementation of the competitive task. The implementation is then evaluated through a set of automated tests, and a short log is copied to a well-known location.

Your task is to process the output of the automated tests and generate a summary web page. Because the actual generation of HTML is not interesting, the task stops at the boundary of generating a set of Markdown pages.

The input data will be stored in the tasks directory. Each subdirectory corresponds to one task; the actual results are in files named after the team with the .log.gz extension (i.e., a plain text file compressed with gzip). Each line in the log file either starts with pass (including the trailing space) or with fail , or it can be safely ignored for our purposes.

It is expected that the shell script can (eventually) be installed into some $PATH-like directory, but it will always read data from the current working directory.

You are supposed to generate an overall ordering of the teams where each passed test (i.e., a line starting with pass ) counts as one point (points are summed across all tasks). For each team you also need to prepare a page showing a breakdown of points across the individual tasks. This page will also link to the original log (you are expected to copy and decompress the log into the same directory where the Markdown files are).

As an example, the input directory tree can look like this:

tasks/
├── m01
│   ├── alpha.log.gz
│   └── bravo.log.gz
└── m02
    ├── alpha.log.gz
    ├── bravo.log.gz
    └── charlie.log.gz

And the contents of tasks/m01/alpha.log.gz after decompression can be the following (recall that we are interested in pass/fail lines only):

pass Test one
fail Test two
  Extra information about failure.
fail Test three
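
For instance, counting the passed tests in a single log could be done along these lines (one possible approach; note that grep -c prints 0 but exits with a non-zero status when nothing matches):

gzip -d -c tasks/m01/alpha.log.gz | grep -c '^pass '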

A complete dataset (together with expected output) can be downloaded from the examples repository.

Then we expect to generate the following index.md:

# My tournament

 1. bravo (5 points)
 2. alpha (3 points)
 3. charlie (1 points)

And for each team a special page like this:

# Team alpha

+--------------------+--------+--------+--------------------------------------+
| Task               | Passed | Failed | Links                                |
+--------------------+--------+--------+--------------------------------------+
| m01                |      1 |      2 | [Complete log](m01.log).             |
| m02                |      2 |      0 | [Complete log](m02.log).             |
+--------------------+--------+--------+--------------------------------------+

The output directory would then contain the following files:

out/
├── index.md
├── team-alpha
│   ├── index.md
│   ├── m01.log
│   └── m02.log
├── team-bravo
│   ├── index.md
│   ├── m01.log
│   └── m02.log
└── team-charlie
    ├── index.md
    ├── m01.log
    └── m02.log

Your script must accept the -o argument for passing the name of the output directory and the -t argument for specifying an alternative index page title (instead of the default My tournament). Your tool must accept both invocation variants, -odirname and -o dirname, for both options.
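
One standard way to handle both variants is the getopts shell builtin; a minimal sketch (the variable names are our choice) could look like this:

while getopts "o:t:" opt; do
    case "$opt" in
        o) output_dir="$OPTARG" ;;
        t) title="$OPTARG" ;;
        *) exit 1 ;;
    esac
done
shift $(( OPTIND - 1 ))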

You can safely assume that team names and task names will be valid C identifiers (i.e., only English letters and digits, without any special characters). You can also safely assume that task names etc. will fit into the table above (regarding column widths).

You are expected to use temporary files to store intermediate results and to solve the task completely in shell.

You cannot use AWK, Perl, Python, or any other language; using sed is allowed (but probably not needed). You cannot use any advanced Bash features such as (associative) arrays or the [[ and (( constructs (use of $(( )), $( ) and test(1) is allowed, though). In other words, we expect you to solve the implementation using constructs and programs you have seen during the labs (the task is such that you should be able to write it yourself without external help).

We expect you will use external tools to drive your implementation but you must understand the whole script before submitting it and you must mark all parts that were not authored by you personally (and if you are using tools such as ChatGPT, you must submit the whole log of your communication with the tool).

All Shellcheck issues must be resolved for the submission to be accepted.

Extension

Check whether a config.rc file exists and load it: it might override the title of the whole tournament and the output directory (i.e., use a different output directory than out).

For example, if config.rc contains the following lines, then the title in the top-level index.md would read # NSWI177 Tournament and the generated files will be stored in the output_md/ directory.

title="NSWI177 Tournament"
output="output_md/"

These values might still be overridden by command-line switches (i.e., the precedence is: default value hard-coded in the script, then values in config.rc, then the -o and -t switches).

In each tasks/* subdirectory, check for a meta.rc file that may contain a line name=".." specifying a different task name than the directory name. This name will then be used in the table mentioned above. The name will not contain any special characters except spaces and dashes.

The .rc files are expected to be sourceable from a shell script and will contain valid shell constructs. You have to check that the file actually exists, though.
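
A minimal sketch of the loading logic (the variable names are again our choice):

# Defaults hard-coded in the script.
title="My tournament"
output="out"

# config.rc may override the defaults; command-line switches are applied later.
if [ -e config.rc ]; then
    . ./config.rc
fi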

Submission and grading

Submit your solution into exam/t04 subdirectory in your NSWI177 repository.

Store the script in exam/t04/tournament.sh (do not split it into multiple files). When you copy fragments from sites such as StackOverflow, we expect you to mark them with comments directly in tournament.sh. Store communication with AI-driven sites in the ai.log file (a plain text file with clearly marked portions for your input and for the answers).

If you create further testing datasets, feel free to store them in the demo subdirectory (similarly to our inputs, e.g., demo/one/tasks with the input and demo/one/output with the expected output).

We provide only basic automated tests: feel free to extend them yourself. Note that diff can be used with -r to compare whole directory subtrees, which might come in useful when comparing the generated output with the expected one.
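
For example, assuming a dataset stored in demo/one as suggested above (the relative path to the script is illustrative only):

cd demo/one
../../tournament.sh -o out
diff -r output out    # empty output means the generated tree matches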

The grading of the task is only passed/failed (and passed with extension). When evaluating your implementation, we will check that it passes basic functionality checks and then assess the quality of your script by a combination of automated checks and manual inspection. This will include checks that the script removes its temporary files, does not randomly remove existing files, etc. For quality, we will look at variable names, comments on the overall structure, and decomposition into functions. The implementation will pass if the number of such issues stays within reasonable limits.

We intentionally do not provide a comprehensive checklist, as that would lead to optimizing against the checklist instead of thinking about how to write a reasonable implementation.

The submission deadline is 2024-05-05.

Task T5: project setup

Set up CI for a Python-based project and prepare it for further distribution.

We expect you will use external tools to drive your implementation but you must understand all the parts before submitting it and you must mark all parts that were not authored by you personally (and if you are using tools such as ChatGPT, you must submit the whole log of your communication with the tool).

Context

In this task you will be working with a simple Python project that is able to render Jinja templates.

As a trivial example (which you can also find in the examples subdirectory of the project repository) it will be able to perform the following transformation.

We will have the following array (list) in JSON:

[
  {
     "name": "Introduction to Linux",
     "code": "NSWI177",
     "homepage": "https://d3s.mff.cuni.cz/teaching/nswi177/"
  },
  {
     "name": "Operating Systems",
     "code": "NSWI200",
     "homepage": "https://d3s.mff.cuni.cz/teaching/nswi200/"
  }
]

We will have the following input file:

Our courses
===========

Below is a list of (almost) all of our courses.

And we will have the following template. Note that control structures of the template are enclosed in {% ... %} (or {%- ... -%} to strip surrounding whitespace) and {{ ... }} is used for variable expansion.

{{ content }}

{%- for course in data -%}
 * [{{ course.name }} ({{ course.code }})]({{ course.homepage }}) {{ NL }}
{%- endfor %}

When executed with our renderer, we will get the following output.

Our courses
===========

Below is a list of (almost) all of our courses.

* [Introduction to Linux (NSWI177)](https://d3s.mff.cuni.cz/teaching/nswi177/)
* [Operating Systems (NSWI200)](https://d3s.mff.cuni.cz/teaching/nswi200/)

Source code

The source code for the above implementation has already been prepared for you, and soon you will have access to a new project under the teaching/nswi177/2024 subtree in GitLab where you will work.

Do not copy this code to your normal submission repository; work in the new t05-LOGIN repository (the only exception is the ai.log, as explained below).

The repository also contains several tests. There are unit tests in Python (using pytest) as well as higher-level tests (let us call them integration tests, even if that might be a bit overstated) written in BATS.

The invocation is described in the project README.

Commit only the files that ought to be committed. Definitely do not commit your virtual environment directories, __pycache__ subdirectories, or Pythonic .egg and .whl files.

Base task

Your main task is to set up basic CI on GitLab for this project and to prepare the project for distribution.

The CI must execute the Python unit tests and fail the job on any error. The preparation for distribution means that after your changes we will be able to install the templater via pip and have it available as the nswi177-jinja-templater command.

Your task is not to copy the project to PyPI but only to set up your repository on our GitLab.

In other words, the following commands would install the templater into a fresh virtual environment, and the invocation of the last command would print a short help text from our program (we assume we are in an empty directory that is not related in any way to any clone of your project).

python3 -m venv templater-venv
. ./templater-venv/bin/activate
pip install git+ssh://git@gitlab.mff.cuni.cz/teaching/nswi177/2024/t05-LOGIN
nswi177-jinja-templater --help

And for the CI we expect that your project would have a job unittests running the Python tests.

Please make sure you name the job unittests and keep all your CI configuration in a single .gitlab-ci.yml file.

The CI should use the python:3.12-alpine image and should install the versions from the requirements.txt file. We expect that pytest (and perhaps other test-related libraries) will not be mentioned in requirements.txt but rather in the requirements-dev.txt file.
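
A minimal sketch of such a configuration (the exact pytest invocation is an assumption, check the project README for the real one):

unittests:
  image: python:3.12-alpine
  script:
    - pip install -r requirements.txt -r requirements-dev.txt
    - python3 -m pytest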

You will notice that the tests are not passing as some of the Jinja filters are not implemented.

Do not worry, though. Your coworker Alice already implemented them in her branch in her repository gitolite3@linux.ms.mff.cuni.cz:t05-alice.

Merge her implementation into your repository to get the full implementation.

The merging is a required part of the task and we require that you perform a normal merge (or a fast-forward) but never a rebase.
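
One possible sequence (the branch name alice/main is an assumption, inspect the fetched repository first):

git remote add alice gitolite3@linux.ms.mff.cuni.cz:t05-alice
git fetch alice
git branch -r         # see which branches Alice actually has
git merge alice/main  # normal merge; resolve conflicts if needed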

If you do everything correctly, your CI job log would look like this (we have blurred the used commands for obvious reasons).

To summarize, the subtasks for the base (mandatory) part of T05 are:

  • Merge Alice's implementation.
  • Set up pyproject.toml, setup.cfg and requirements.txt (and requirements-dev.txt) for the t05-LOGIN project on GitLab.
  • Set up GitLab CI that runs the Python tests in the unittests job.

Optional extension

Optionally, you will extend the CI to also execute the BATS tests.

We expect that you will add a new job called integrationtests that installs the package (so that the nswi177-jinja-templater command is available) and runs all the BATS files in the tests subdirectory (recall that a simple pip install . should work like a charm).

You will need to install bats via apk add first (it is okay to install this package on every CI run).

Note that the current implementation in common.bash invokes the program via env PYTHONPATH=src python3 -m nswi177.templater – change this to call nswi177-jinja-templater instead. You will also need to replace the --kill-after switch of the timeout command with plain -k, as the long variant is not supported in Alpine.
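
Such a job might look roughly like this (again only a sketch; the tests path is an assumption):

integrationtests:
  image: python:3.12-alpine
  script:
    - apk add bats
    - pip install .
    - bats tests/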

The number of BATS tests is quite low, so you should also merge the work of your co-workers Bob and Charlie into your repository. Again, perform a normal merge and not a rebase when integrating their work.

Bob has his repository at gitolite3@linux.ms.mff.cuni.cz:t05-bob while Charlie put his copy to our website to https://d3s.mff.cuni.cz/f/teaching/nswi177/202324/t05-charlie.git.

In case of conflicts make sure you resolve them correctly – you certainly do not want to drop any of the tests.

To summarize, the subtasks for the (optional) extension of T05 are:

  • Merge the tests from Bob and Charlie.
  • Fix the invocation of the command in the BATS tests.
  • Fix the call to timeout to be timeout -k 30 "${timeout}" "$@".
  • Add the integrationtests job to run all the BATS tests.

Submission and grading

Submit your solution into the t05-LOGIN repository on GitLab.

We will check that the project can be installed via pip install git+https://gitlab.../t05-LOGIN/ and that your CI configuration is correct (after all the merges, your jobs should be in the green). We will also check that the Python code can be executed after pip install -r requirements.txt via python3 -m src.nswi177.templater (or via python -m nswi177.templater if the package itself is also installed) and that the pytest tests can be run after installing from requirements-dev.txt.

When you copy fragments from sites such as StackOverflow, we expect you to mark them with comments directly in the appropriate sources. Communication with AI-driven sites should be stored in the exam/t05/ai.log file in your normal submission repository (student-LOGIN), again as a plain text file with clearly marked portions for your input and for the answers.

The submission deadline is 2024-06-16.