This page describes what the on-site tests will look like and will eventually also contain descriptions of the homework assignments. Please have a look at the course guide page for details on how the grading works.

On-site tests

This is the schedule for the on-site tests. The tests will be held at the beginning of the lab (the maximum duration of a test is 45 minutes).

Week (date)           Topic                                                               Extension details
05 (Mar 18 - Mar 22)  T1: Basic shell scripting (pipes)                                   Second task of similar size
09 (Apr 15 - Apr 19)  T2: Using Git CLI (incl. branching/merging and push/pull over SSH)  More complex branching
14 (May 20 - May 24)  T3: make build tool (tweaking existing setup)                       More complex setup of make-based project

You are expected to come to the lab you have enrolled in.

If you need to come to some other lab, please contact us via a confidential issue at least one week ahead of time and provide reasonable documentation, so that we can find an alternative slot for you (note that attending labs of another course or attending a sporting event is not a valid reason). If you miss the test due to some unexpected circumstances, please contact us as soon as possible (again, through a confidential issue). Expect that the extra term will probably be held during the examination period.

If you are enrolled in the special Friday lab, please check here at the beginning of week 5 to see whether a split will be needed (i.e. whether half of the students will need to come at 10:40 and the second half at 11:25).

UPDATE: There are currently more than 30 students enrolled in the Friday lab 23bNSWI177x13. If your SIS login starts with [a-k], please come at 10:40; if your login starts with [l-z], please come at 11:30.

UPDATE: Because of low attendance of the Friday lab 23bNSWI177x13, everyone is expected to come at 10:40. Thank you!

The tests will be written on school machines. Make sure you can log in there and that your environment is set up comfortably.

Your solution will be submitted through GitLab or through another Git repository: make sure you can perform a clone via a command-line client. (For the first test during week 5 you will be able to use the GitLab web UI, but using the CLI tools is actually simpler and you will know all the necessary commands from Lab 04.)

You are allowed to use our web pages and off-line manpages, and you can consult your notes and your solutions to the examples that are part of the lab materials.

You are not allowed to use any other devices (cell phones, your own laptops etc.), consult other on-line resources (the machines will have restricted access to the Internet anyway) or communicate your solution to other students.

In other words, the on-site tests require that you can solve the tasks on your own with technical documentation only.

Any attempt to bypass the above rules (e.g. trying to search StackOverflow on your cell phone) means failing the course on the spot.

You are, obviously, free to ask your teacher for clarification if you do not understand the assignment. We can provide limited hints if it is clear that you are heading in the right direction and need only a little push.

Sample task (shell pipes)

Write a script (shell pipe) that reads web server logs and prints the day with the highest number of requests.

The logs will use the same format as in the example from the labs:

2023-01-05,02:35:05,192.168.18.232,622,https://example.com/charlie.html
2023-01-05,09:01:33,10.87.6.151,100,https://example.com/bravo.html
2023-01-06,17:25:17,52.168.104.245,1033,https://example.com/delta/delta.html

For the above data, we expect the script to print the following output:

2023-01-05

The web log will come on standard input; print the date on standard output. You can safely assume that the input is not corrupted. Do not make any assumptions about the order of the input lines.
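One possible pipeline, using only commands covered in the labs (many equally valid variants exist):

```shell
# Reads the log on stdin, prints the busiest day on stdout.
cut -d, -f1 |            # keep only the date column
    sort | uniq -c |     # count requests per date
    sort -n | tail -n1 | # the date with the highest count sorts last
    sed 's/^ *[0-9]* //' # strip the leading count, keep the date
```

Note that sorting numerically and taking the last line avoids any assumptions about the input order.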

Notes for the Git CLI exam

Please see the info about the Friday lab 23bNSWI177x13 above.

You will be expected to perform the following tasks in Git from the command line (some might need to be executed on the remote machine linux.ms.mff.cuni.cz).

  • Configure your Git environment (author and e-mail)
  • Clone a repository (from gitolite3@linux.ms.mff.cuni.cz or from GitLab or generic HTTPS)
  • Create a commit
  • Create a branch
  • Switch between branches
  • Merge a branch (and solve any conflicts)
  • Push changes (branches) to server
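The whole workflow can be rehearsed on a local throw-away repository (the repository and file names below are made up; during the exam you would clone from gitolite3@linux.ms.mff.cuni.cz instead and push the result to the server):

```shell
git init demo && cd demo
git config user.name "Student Name"         # author setup (per-repository here)
git config user.email "student@example.com"
echo base > notes.txt
git add notes.txt
git commit -m "Initial commit"
git switch -c feature                       # create a branch and switch to it
echo extra >> notes.txt
git commit -am "Add extra line"
git switch -                                # switch back to the original branch
git merge feature                           # merge (a fast-forward here, no conflicts)
```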

Ensure that you can clone from gitolite3@linux.ms.mff.cuni.cz when using the school machines. Only authentication via public key will be available (i.e. upload your keys to 05/key.[0-9].pub files in your repository before the exam, as explained in Lab 05).

Update: we have added a new CI job to your GitLab repositories that will warn you about typical issues with your keys. Feel free to execute it locally via ./bin/run_tests.sh misc/keys.

The following command (replace LOGIN with your SIS/GitLab login in lowercase) will check that your keys work.

ssh -o ForwardAgent=no LOGIN@u-pl1.ms.mff.cuni.cz ssh -T gitolite3@linux.ms.mff.cuni.cz

You should see something like the following in the output:

hello LOGIN, this is gitolite3@linux running gitolite3 3.6.13-2.fc39 on git 2.44.0

 R W    lab05-LOGIN
 R      lab07-group-sum-ng

If you see the following instead, your keys are not set up correctly.

(LOGIN@u-pl1.ms.mff.cuni.cz) Password:
Permission denied, please try again.
Permission denied, please try again.
gitolite3@linux.ms.mff.cuni.cz: Permission denied (publickey,password).

Update to the above.

The command will first ask you for your SIS/GitLab password because you are first authenticating to the school lab machine. From there, you SSH to our server, which will use the key.

If you have set up passphrase protection of your keys, you will need to remove the -T from the command above (and perhaps even add -tt to the first SSH, i.e., ssh -tt -o ForwardAgent=no ...) [see issue #102].

Feel free to store the URL gitolite3@linux.ms.mff.cuni.cz somewhere on the local disk in your $HOME so that you do not have to copy it manually during the exam.

For example, adding export giturl=gitolite3@linux.ms.mff.cuni.cz to your .bashrc (similar to setting $EDITOR) would allow you to call just git clone $giturl:lab05, which might also save you some time.

Update: another option is to set up an alias in your ~/.ssh/config like this, which would allow you to clone via git clone exam:lab05-LOGIN.

Host exam
    Hostname linux.ms.mff.cuni.cz
    User gitolite3

The focus of the exam is on working with Git. You will not be required to write any script on your own, but we will be working with a repository containing the following script for printing simple bar charts in the console. You will be required to make some small modifications (such as fixing typos), but we will always guide you to the right place in the code.

import argparse
import sys

def parse_config():
    args = argparse.ArgumentParser(description='Console bar plot')
    args.add_argument('--columns', default=60, type=int, metavar='N')
    return args.parse_args()

def load_input(inp):
    values = []
    for line_raw in inp:
        line = line_raw.strip()
        if line.startswith('#') or not line:
            continue
        try:
            val = float(line)
        except ValueError:
            print(f"WARNING: ignoring invalid line '{line}'.", file=sys.stderr)
            continue
        values.append(val)
    return values

def print_barplot(values, scale, symbol):
    for val in values:
        print(symbol * round(val / scale))

def main():
    config = parse_config()
    values = load_input(sys.stdin)
    if not values:
        sys.exit(0)
    coef = max(values) / config.columns
    print_barplot(values, coef, '#')

if __name__ == '__main__':
    main()
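For example, assuming the script above is saved as barplot.py (the filename is illustrative), piping two values into it scales the longest bar to --columns characters:

```shell
printf '3\n6\n' | python3 barplot.py --columns 10
# prints:
# #####
# ##########
```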

Homework assignments

This is the preliminary schedule for the homework assignments and their topics (we expect that the deadline for the second assignment will fall within the examination period).

Weeks    Topic                                     Extension details
07 - 10  T4: More complex shell script             Extra feature of the main task
12 - 15  T5: Project setup (CI, build tools, Git)  Extra feature of the main task

As with on-site tests, your solution will be submitted through some Git repository for evaluation.

For this solution you are allowed to use virtually any resources available, including manual pages, our website, on-line tutorials or services such as ChatGPT or similar.

You must properly cite your sources if you copy (or copy and adapt) your solution (this includes answers from AI tools). You do not have to document the use of manual pages or of the course website.

No matter which sources were used you must be able to understand and explain the design/implementation of your solution. Inability to explain your solution is equivalent to no submission at all.

The homework assignments are individual tasks that must be solved by each student separately. Discussing your solution with your colleagues is fine; submitting their work as yours is prohibited.

Task T4: shell script

Write a web page generator for a task-based tournament.

There can be an arbitrary number of teams in the tournament, and each team can submit its implementation of the competitive task. The implementation is then evaluated through a set of automated tests, and a short log is copied to a well-known location.

Your task is to process the output of the automated tests and generate a summary web page. Because the actual generation of HTML is not interesting, the task stops at the boundary of generating a set of Markdown pages.

The input data will be stored in the tasks directory. Each subdirectory corresponds to one task; the actual results are in files named after the team with the .log.gz extension (i.e. a plain text file compressed with gzip). Each line in a log file either starts with pass (including the space) or fail, or can be safely ignored for our purposes.
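For example, counting the points a team scored in one task boils down to decompressing the log and counting the pass lines (the path comes from the example layout shown later on this page):

```shell
# Decompress the log and count the lines that denote a passed test
zcat tasks/m01/alpha.log.gz | grep -c '^pass '
```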

It is expected that the shell script can (eventually) be installed in some $PATH-like directory, but it will always read data from the current working directory.

You are supposed to generate an overall ordering of the teams where each passed test (i.e. a line starting with pass ) counts as one point (the points are summed across all tasks). For each team you also need to prepare a page showing a breakdown of points across the individual tasks. This page will also link to the original log (you are expected to copy and decompress the log to the same directory where the Markdown files are).

As an example, the input directory tree can look like this:

tasks/
├── m01
│   ├── alpha.log.gz
│   └── bravo.log.gz
└── m02
    ├── alpha.log.gz
    ├── bravo.log.gz
    └── charlie.log.gz

And the contents of tasks/m01/alpha.log.gz, after decompression, can be the following (recall that we are interested in pass/fail lines only):

pass Test one
fail Test two
  Extra information about failure.
fail Test three

A complete dataset (together with expected output) can be downloaded from the examples repository.

Then we expect to generate the following index.md:

# My tournament

 1. bravo (5 points)
 2. alpha (3 points)
 3. charlie (1 points)

And for each team a special page like this:

# Team alpha

+--------------------+--------+--------+--------------------------------------+
| Task               | Passed | Failed | Links                                |
+--------------------+--------+--------+--------------------------------------+
| m01                |      1 |      2 | [Complete log](m01.log).             |
| m02                |      2 |      0 | [Complete log](m02.log).             |
+--------------------+--------+--------+--------------------------------------+

The output directory would then contain the following files:

out/
├── index.md
├── team-alpha
│   ├── index.md
│   ├── m01.log
│   └── m02.log
├── team-bravo
│   ├── index.md
│   ├── m01.log
│   └── m02.log
└── team-charlie
    ├── index.md
    ├── m01.log
    └── m02.log

Your script must accept the -o argument for passing the name of the output directory and -t for specifying an alternative index page title (instead of the default My tournament). Your tool must accept both invocation variants, -odirname and -o dirname, for both options.
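Both invocation variants are handled automatically by getopts; a minimal sketch (the variable names are illustrative and the final echo only demonstrates the parsed result):

```shell
#!/bin/sh
# Hard-coded defaults, overridden by -o/-t when present.
output_dir="out"
title="My tournament"
while getopts "o:t:" opt; do
    case "$opt" in
        o) output_dir="$OPTARG" ;;
        t) title="$OPTARG" ;;
        *) echo "Usage: $0 [-o DIR] [-t TITLE]" >&2; exit 1 ;;
    esac
done
echo "$output_dir|$title"
```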

You can safely assume that team names and task names will be valid C identifiers only (i.e. only the English alphabet and digits, without any special characters). You can safely assume that task names etc. will fit into the table above (regarding column widths).

You are expected to use temporary files to store intermediate results and you are expected to solve the task completely in shell.

You cannot use AWK, Perl, Python or any other language; using sed is allowed (but probably not needed). You cannot use any advanced Bash features such as (associative) arrays or the [[ and (( constructs (use of $(( )), $( ) and test(1) is allowed, though). In other words, we expect you to solve the implementation using constructs and programs you have seen during the labs (and the task is such that you should be able to write it yourself without external help).

We expect you will use external tools to drive your implementation, but you must understand the whole script before submitting it and you must mark all parts that were not authored by you personally (and if you are using tools such as ChatGPT, you must submit the whole log of your communication with the tool).

All Shellcheck issues must be resolved for the submission to be accepted.

Extension

Check if a config.rc file exists and load it: it might override the title of the whole tournament and the output directory to use (instead of the default out).

For example, if config.rc contains the following lines, then the title in the top-level index.md would read # NSWI177 Tournament and the generated files will be stored in the output_md/ directory.

title="NSWI177 Tournament"
output="output_md/"

These values might still be overridden by command-line switches (i.e. the order of precedence is: the default value hard-coded in the script, then values in config.rc, then the -o and -t switches).

In each tasks/* subdirectory, check for a meta.rc file that may contain the line name=".." to use a different task name than the directory name. This name will then be used in the table mentioned above. The name will not use any special characters except spaces and dashes.

The .rc files are expected to be sourceable by a shell script and will contain valid shell constructs. You have to check that the file actually exists, though.
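Loading the optional overrides can be sketched like this (the variable names follow the config.rc example above and are otherwise illustrative):

```shell
# Defaults hard-coded in the script
title="My tournament"
output="out"

# Source the config only when it actually exists (and is readable)
if [ -r config.rc ]; then
    . ./config.rc
fi
```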

Submission and grading

Submit your solution into exam/t04 subdirectory in your NSWI177 repository.

Store the script into exam/t04/tournament.sh (do not split it into multiple files). When you copy fragments from sites such as StackOverflow, we expect you to mark them with comments directly in tournament.sh. Store communication with AI-driven sites in an ai.log file (a plain text file with clearly marked portions of your input and of the answers).

If you create further testing datasets, feel free to store them in the demo subdirectory (similarly to our inputs, e.g. demo/one/tasks and demo/one/output with the expected output).

We provide only basic automated tests: feel free to extend them yourself. Note that diff can be used with -r to compare whole directory subtrees, which might come in useful when comparing the generated output with the expected one.
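For example, assuming you generated into out and the expected tree lives in demo/one/output (paths follow the layout described above), the comparison is a single command:

```shell
diff -r demo/one/output out   # exits with 0 only when the trees are identical
```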

The grading of the task is only passed/failed (or passed with extension). When evaluating your implementation, we will check that it passes basic functionality checks and then assess the quality of your script by a combination of automated checks and manual inspection. This will include checks that the script removes its temporary files, does not remove existing files randomly, etc. For the quality, we will look at variable names, comments, the overall structure and the decomposition into functions. The implementation will pass if the number of such issues is within reasonable limits.

We intentionally do not provide a comprehensive checklist, as that would lead to optimizing against the checklist instead of thinking about how to write a reasonable implementation.

The submission deadline is 2024-05-05.

Task T5: project setup

Details will appear here around week 12.