This page describes how to read the results of the automated tests in your repository (note that the screenshots might look slightly different for your project).

When you open your course project, you will notice a green or red icon next to your last commit message, representing the state of your project.

There is also a testing-in-progress state, but it is quickly replaced by the final outcome.

Clicking on the icon will show a detailed record of the actual tests.

The tests are split into groups that always cover topics related to a single lab (and indicate whether they belong to before-class or post-class tasks).

Thus you can easily see your overall status and quickly check that you are ready for the next lab.

Opening any of the groups will show you a transcript of the actions that GitLab performed for you.

The important part starts with bin/tests.sh, followed by text similar to the following.

1..4
ok 1 01/project_name.py - submitted (informative)
not ok 2 01/project_name.py - no readme files at all (ok=20)
# -- output differs --
# expected : project_directory
# actual   : user/project_directory
# --
not ok 3 02/architecture.sh - Submitted (informative)
# ... (output shortened)
#
# File 02/architecture.sh was not submitted.
ok 4 02/architecture.sh - Works (ok=30) # skip File 02/architecture.sh was not submitted.

This is Test Anything Protocol (TAP) output. It says that four tests were executed (1..4): two of them passed (ok 1 ...) while the others failed (not ok 2 ...).

The message for a passing test is rather short – it merely states the name of the test (01/project_name.py - submitted (informative)). We will use this name format for tests that merely check that a file is present in your project.

For a failing test, the message again contains the name of the test (01/project_name.py - no readme files at all (ok=20)), followed by a further description of the failure. TAP uses a comment-like syntax: lines with extra details are prefixed with the hash sign (#).

Here we can see that the test expected some output (project_directory) but our solution printed something else (user/project_directory in this example).
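One possible cause of such a mismatch (a purely hypothetical sketch; the real assignment and your code may differ) is printing a whole path where only its last component is expected:

#!/usr/bin/env python3
# Hypothetical sketch of 01/project_name.py: the test expects "project_directory".
import os

path = "user/project_directory"    # hypothetical value computed by the script
# print(path) would output "user/project_directory" and fail the comparison
print(os.path.basename(path))      # outputs "project_directory", matching the expected value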

Test 3 failed because the file was not submitted. Note that the fourth test is marked as okay even though it was skipped (we do not execute the actual test if the file was not submitted at all). It is an unfortunate limitation of the TAP protocol that skipped tests are not more visible, but the reasoning is that a skipped test is usually not considered a failure.
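To summarize, every TAP result line follows the same simple structure: the plan line 1..N gives the number of tests, each test then reports ok or not ok followed by its number and name, a trailing # skip marks a skipped test, and extra diagnostic lines start with #. A simplified sketch (test names are placeholders):

1..3
ok 1 name of the first test
not ok 2 name of the second test
# diagnostic details explaining why test 2 failed
ok 3 name of the third test # skip reason why the test was skipped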

This should give you enough information about what went wrong. In this case, the program was written in Python, so you can debug it on your own machine as you see fit.

Please, do not use GitLab as your development platform. Always debug and implement your solution locally – GitLab cannot serve as a replacement for a missing installation of Python, Linux or other software on your machine. The jobs running the tests are executed on a shared machine – let others use the machine as well.

Each test also contains grading-related information in parentheses.

The word informative means that the test does not directly influence the grading but provides extra information. There will always be such tests to check that you have submitted your solution under the right filename. We will not accept solutions submitted under different filenames. Also bear in mind, please, that Linux file names are case-sensitive: project_name.py and PROJECT_name.py are two different files.

The pattern ok=20 roughly specifies the number of points assigned for passing this test. The number may be slightly adjusted during our evaluation if the need arises. Do not plan on passing with the exact minimum of points based on the numbers assigned to these tests: their purpose is to represent the importance of a particular test, not an exact amount of points.

For some tests you will see the pattern fail=-10, which represents the number of points that will be subtracted if the test fails. We will use these for tests checking things you should already be familiar with (e.g. that you do not use import * in Python).
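To illustrate the import * rule mentioned above, here is a minimal Python example (the module and function names are arbitrary):

# from math import *     # wildcard import: avoid, it clutters the namespace and would be penalized
from math import sqrt     # explicit import: it is clear where sqrt comes from

print(sqrt(2))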

We will always grade your solution at the time of the deadline (i.e. we will look at the state of your project at the deadline and use that state to run the tests and compute the points). Feel free to store incomplete solutions; the number of commits (resubmissions) is not limited (but, please, do not use GitLab instead of your development platform).

GitLab also sends you an e-mail when any of these automated actions fails (in GitLab terminology, these are Pipelines and Jobs). You can also turn the notifications off if you prefer to check the results manually.

We will evaluate and assign points based on the published tests, so you will know how many points you will get. We might alter the number of points in special cases.

We will penalize attempts to game the tests in any way.

For any kind of quiz (e.g. textual answers or multiple-choice questions), the automated tests merely check that the answer makes sense (i.e., that you have provided a number when a number was expected, not that the number is actually correct).