You will receive access to a new repository with the API designed by another team. Your task is to write tests for the public API and set up CI for the project. This task is done individually, which means that each API should receive two independent sets of tests.

API tests

The purpose of this task is to practice writing tests against a public API without access to the implementation (i.e. black-box style testing). Your tests should cover all the functionality provided by the library — review the API requirements from task 1, check whether the library provides some extra functionality and write the tests.

The character of your tests will depend on the API provided by the library. Libraries providing a declarative API may require setting up different situations using data classes and checking that the library had the expected effect after parsing a specific command line. Tests for libraries providing an imperative interface may look just like a set of unit tests, because the API might provide you with a set of classes full of public methods.

Keep in mind that you should test the API from the perspective of the client code. This means that your test classes should be in a different package (to make package private elements inaccessible), and should not rely on “friend” access (in languages where that applies).

Use the requirements to define the expected behavior in the tests. If the requirements do not clearly define some behavior, check if the API defines it (in the sample client code or in the documentation) and write tests according to that. If in doubt, define the expected behavior yourself using the tests. Do not forget to test failure cases, i.e., situations in which you either create an invalid parser configuration or provide invalid input on the command line.
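As an illustration, a failure-case test might look like the sketch below (Python with the stdlib `unittest`; the `ArgParser` class and `ParserError` exception are hypothetical stand-ins for the library under test, with a stub body included only so the sketch runs):

```python
import unittest

class ParserError(Exception):
    pass

class ArgParser:
    """Hypothetical stand-in for the library under test."""
    def __init__(self):
        self._opts = set()

    def add_option(self, name):
        # Adding the same option twice is an invalid parser configuration.
        if name in self._opts:
            raise ParserError(f"duplicate option: {name}")
        self._opts.add(name)

    def parse(self, args):
        # Unknown options on the command line are invalid input.
        for arg in args:
            if arg.lstrip("-") not in self._opts:
                raise ParserError(f"unknown option: {arg}")

class FailureCaseTest(unittest.TestCase):
    def test_duplicate_option_rejected_at_configuration_time(self):
        parser = ArgParser()
        parser.add_option("verbose")
        with self.assertRaises(ParserError):
            parser.add_option("verbose")

    def test_unknown_option_rejected_on_command_line(self):
        parser = ArgParser()
        parser.add_option("verbose")
        with self.assertRaises(ParserError):
            parser.parse(["--quiet"])
```

Note that the two tests cover both kinds of failure the paragraph above mentions: an invalid parser configuration and invalid command-line input.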

Because there is no implementation, your tests will (obviously) fail. While that can be somewhat discouraging, a failing test is also the primary driving force in Test Driven Development, because it makes the expected behavior explicit and thus makes it clear what needs to be implemented.

It is difficult to specify the number of tests that your solution needs to provide, but experience from past years shows that the quality of the solution clearly correlates with the number of tests. Solutions with 30+ test methods were markedly better than solutions with fewer than 20. Note, however, that not all test methods are created equal.

A single parametric method can test a single scenario with different combinations of data, producing a high number of actual “test instances”. A plain test method usually tests a single scenario with a single piece of data. If multiple plain test methods are used to cover one scenario with different inputs, a solution with 30 such methods may still be inadequate, because it represents only 30 “test instances”.

You are expected to use parametric test methods to capture scenarios and use the parameters to test the scenarios with different data inputs. Solutions with 20 parametric test methods are rarely considered inadequate, unlike solutions with zero parametric methods.
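A rough sketch of what this looks like in practice (Python’s stdlib `unittest` expresses parametrization with `subTest`; the `parse_count` helper is a hypothetical example subject):

```python
import unittest

def parse_count(value):
    """Hypothetical library function: parse a non-negative count."""
    n = int(value)
    if n < 0:
        raise ValueError(value)
    return n

class CountParsingTest(unittest.TestCase):
    # One scenario, many inputs: each entry is one "test instance".
    VALID = [("0", 0), ("7", 7), ("42", 42)]
    INVALID = ["-1", "abc", ""]

    def test_valid_counts_are_parsed(self):
        for raw, expected in self.VALID:
            with self.subTest(raw=raw):
                self.assertEqual(parse_count(raw), expected)

    def test_invalid_counts_are_rejected(self):
        for raw in self.INVALID:
            with self.subTest(raw=raw):
                with self.assertRaises(ValueError):
                    parse_count(raw)
```

Two test methods here yield six “test instances”; frameworks such as JUnit 5 (`@ParameterizedTest`) or pytest (`@pytest.mark.parametrize`) provide the same idea in a more declarative form.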

Use common sense to provide a reasonable set of tests that you believe would help your colleagues drive their implementation and also what you would expect from the other students creating tests for your solution.

Finally, use a well-known unit testing framework that nicely integrates with the language; avoid inventing your own. Just keep in mind that even though you will be using a unit testing framework, your tests are not really unit tests.

CI in GitLab

The second part of this task is to set up a simple CI for the project. Simply put, configure the project so that tests are executed automatically on each commit (push) to GitLab.

If you have never configured CI in GitLab, we suggest consulting the reference manual and examples for unit testing.
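As a starting point, a minimal `.gitlab-ci.yml` can be as small as one job; the image and command below assume a Gradle-based Java project and must be adapted to your actual build tool:

```yaml
# Minimal pipeline: run the test suite on every push.
image: gradle:jdk17        # assumption: Gradle/Java project; adapt to your stack

test:
  stage: test
  script:
    - gradle test
```

GitLab picks this file up automatically from the repository root; no further configuration is needed for the basic case.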

Suggestions

The following is a short summary of things to watch out for in your tests, because they keep appearing over and over in the submitted tests.

Naming

  • Test method names should reflect the situation being tested and the expected outcome.

    • StackTest.popFailsOnEmptyStack() is a better name than StackTest.testPop().

    • Avoid test prefix in the name if your language supports annotations, e.g., @Test (Java) or [Test] (C#), or similar.

    • Avoid should in the test method name, just describe the expected outcome.

    • Like in any other names, avoid sequential numbering of methods, e.g., testFoo1(), testFoo2(), etc.

  • Prefer the language of the problem, not the implementation. There are even libraries (e.g., catch for C++) where you don’t have to deal with test method names and just use plain English descriptions.
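Translated into Python terms (the `Stack` class is a hypothetical example subject; note that Python’s `unittest` requires the `test_` prefix, so it stays, unlike in annotation-based frameworks):

```python
import unittest

class Stack:
    """Hypothetical subject under test."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    # Good: names the situation and the expected outcome.
    def test_pop_fails_on_empty_stack(self):
        with self.assertRaises(IndexError):
            Stack().pop()

    # Good: no numbering, no "should", reads as a statement of behavior.
    def test_pop_returns_most_recently_pushed_item(self):
        stack = Stack()
        stack.push("a")
        stack.push("b")
        self.assertEqual(stack.pop(), "b")
```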

Coverage

  • Cover different situations in a systematic fashion, based on the requirements from task-1, and organize the tests accordingly (i.e., not necessarily by classes, but by scenarios).

  • Don’t forget to test robustness with respect to invalid inputs. The API should be explicit in how it reacts to such inputs — error handling IS part of design.

Implementation

  • Make the test code obvious for the reader.

    • The reader must be able to see that you are arranging the right situation, executing the right action, and performing the right asserts to determine the test outcome.

    • In other words, strive for the Arrange-Act-Assert (AAA) structure in your test code.

  • Keep tests small and test one situation at a time.

    • This also means that any testable conditions from the arrange phase should be tested using separate tests (not asserts). The assert phase should only concern the particular situation being tested.

  • Make test inputs “visible”.

    • Unlike non-test code, constant literals are tolerated and often preferable in tests, because they are part of the scenario being tested (e.g., invalid inputs, boundary conditions).
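The points above combined give a shape like the following (the `Account` class is a hypothetical example subject; note the literal `100`, `30`, and `70` making the scenario visible):

```python
import unittest

class Account:
    """Hypothetical subject under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawalTest(unittest.TestCase):
    def test_withdrawal_reduces_balance(self):
        # Arrange: a visible, literal starting balance.
        account = Account(balance=100)
        # Act: exactly one action under test.
        account.withdraw(30)
        # Assert: only the outcome of this one scenario.
        self.assertEqual(account.balance, 70)
```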

Exceptions

  • Declare expected exceptions using test annotations, don’t catch them.

    • An unexpected exception should crash (fail) the test.

    • If multiple statements may throw the expected exception, use the assert throws facility of your testing framework, which typically provides an assertThrows() method taking the code to execute as a lambda and an exception type to expect. Needing this may also indicate that the API does not provide exception types corresponding to the right level of abstraction.
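In Python the equivalent facility is `assertRaises` used as a context manager; the point is to keep its scope to exactly the statement expected to throw (the `Config` class is a hypothetical example subject):

```python
import unittest

class ConfigError(Exception):
    pass

class Config:
    """Hypothetical subject under test."""
    def __init__(self):
        self._values = {}

    def set(self, key, value):
        self._values[key] = value

    def require(self, key):
        if key not in self._values:
            raise ConfigError(key)
        return self._values[key]

class ConfigTest(unittest.TestCase):
    def test_missing_key_is_reported(self):
        config = Config()
        config.set("host", "localhost")   # arrange: must NOT raise
        # Only the statement expected to throw goes inside the block,
        # so an exception from the arrange phase still fails the test.
        with self.assertRaises(ConfigError):
            config.require("port")
```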

Parametrization

  • Use parametric (theory) tests for cases where everything but the input data stays the same, and provide the data in a declarative way.

    • Strive to keep the tests obvious even when using parametric (theory) tests. This typically requires capturing the expected outcomes in the provided data. You can always comment on the data, or include an extra (dummy) string in the data containing a textual description.

  • Avoid code duplication (while keeping the code obvious).

    • Extract common parts into test setup phase or create parametric helper methods (i.e., build a “vocabulary”) to setup various scenarios. Just make sure the code remains readable.

Sources

  • Treat test code with the same care as your other (non-test) code.

    • Put the code in the right package, split the code into multiple files where applicable, use white space (empty lines) to separate elements (methods), etc.

Submission

When you are satisfied with your solution to this task, make sure it is in the master branch of your task-4 repository and tag the commit with the task-4-submission tag.

Note on tagging

Use the git tag task-4-submission command instead of putting the tag name (or a hashtag) into a commit message — that is not a proper Git tag. Then push the tag to GitLab by using the git push --tags command.