In the last lab we will bring together the knowledge of all the topics we have covered so far. We will start with build system tools that allow us to capture – and codify – the build process, and then we will see how GitLab can be set up to regularly check our code and thus keep our project healthy.
Running example
We will return for the last time to our website generation example and use it as a running example for this lab.
We will again use the simpler version that looked like this:
#!/bin/bash
set -ueo pipefail
pandoc --template template.html index.md >index.html
pandoc --template template.html rules.md >rules.html
./table.py <score.csv | pandoc --template template.html --metadata title="Score" - >score.html
Notice that for index and rules, there are Markdown files to generate the HTML from. The score page is generated from a CSV data file.
Setup
Please, create a fork of the web repository so that you can try the examples yourself.
Motivation for using build systems
In our running example, the whole website is built in several steps where HTML pages are generated from different sources. That is actually very similar to how software is built from sources (consider sources in the C language that are compiled and linked together).
While the above steps do not build an executable from sources (as is the typical case for software development), they represent a typical scenario.
Building software usually consists of many steps that can include actions as different as:
- compiling source files to some intermediate format
- linking the final executable
- creating bitmap graphics in different resolutions from a single vector image
- generating source-code documentation
- preparing localization files with translation
- creating a self-extracting archive
- deploying the software on a web server
- publishing an artefact in a package repository
- …
Almost all of them are simple by themselves. What is complex is their orchestration. That is, how to run them in the correct order and with the right options (parameters).
For example, before an installer can be prepared, all other files have to be ready. Localization files often depend on precompilation of some sources, but have to be prepared before the final executable is linked. And so on.
Even for small projects, the number of steps can be quite high, yet they are – in a sense – unimportant: you do not want to remember them, you want to build the whole thing!
Note that your IDE can often help you with all of this – with a single click. But not everybody uses the same IDE and you may even not have a graphical interface at all.
Furthermore, you typically want to run the build as part of each commit – the GitLab pipelines we use for tests are a typical example: they execute without a GUI, yet we want to build the software (and test it too). Codifying the build in a script simplifies this for virtually everyone.
Our build.sh script mentioned above is actually pretty nice. It is easy to understand, contains no complex logic, and a new member of the team does not need to investigate all the tiny details: they can just run the single build.sh script.
The script is nice, but it overwrites all files even if there was no change. In our small example, it is no big deal (you have a fast computer, after all). But in a bigger project where we, for example, compile thousands of files (e.g. look at the source tree of the Linux kernel, Firefox, or LibreOffice), it matters.
If an input file was not changed (e.g. we modified only rules.md), we do not need to regenerate the other files (e.g., we do not need to re-create index.html).
Let’s extend our script a bit.
...
should_generate() {
    local barename="$1"
    if ! [ -e "${barename}.html" ]; then
        return 0
    fi
    if [ "${barename}.md" -nt "${barename}.html" ]; then
        return 0
    else
        return 1
    fi
}
...
should_generate index && pandoc --template template.html index.md >index.html
should_generate rules && pandoc --template template.html rules.md >rules.html
...
We can do that for every command to speed up the web generation.
But.
That is a lot of work. And the time saved would probably all be wasted on rewriting our script. Not to mention the fact that the result looks horrible. And it is rather expensive to maintain.
Also, we often need to build just a part of the project: e.g., regenerate only the documentation (without publishing the result, for example). Although extending the script along the following lines is possible, it certainly is not viable for large projects.
if [ -z "$1" ]; then
... # build here
elif [ "${1:-}" = "clean" ]; then
rm -f index.html rules.html score.html
elif [ "${1:-}" = "publish" ]; then
cp index.html rules.html score.html /var/www/web-page/
else
...
Luckily, there is a better way.
There are special tools, usually called build systems, that have a single purpose: to orchestrate the build process. They provide the user with a high-level language for capturing the above-mentioned steps for building software.
In this lab, we will focus on make.
make is a relatively old build system, but it is still widely used. It is also one of the simplest tools available: you need to specify most things manually, but that is great for learning. You will have full control over the process and you will see what is happening behind the scenes.
make
Move into the root directory of (the local clone of your fork of) the web example repository first, please.
The files in this directory are virtually the same as in our shell script above, but there is one extra file: Makefile. Notice that Makefile is written with a capital M to be easily distinguishable (ls in a non-localized setup sorts uppercase letters first).
This file is a control file for a build system called make that does exactly what we tried to imitate in the previous example. It contains a sequence of rules for building files.
We will get to the exact syntax of the rules soon, but let us play with them first. Execute the following command:
make
You will see the following output (if you have executed some of the commands manually, the output may differ):
pandoc --template template.html index.md >index.html
pandoc --template template.html rules.md >rules.html
make prints each command before executing it. It has built the website for us: notice that the HTML files were generated. For now, we do not generate the version.inc.html file at all.
Execute make again.
make: Nothing to be done for 'all'.
As you can see, make was smart enough to recognize that since no file was changed, there is no need to run anything.
Update index.md (touch index.md would work too) and run make again. Notice how index.html was rebuilt while rules.html remained untouched.
pandoc --template template.html index.md >index.html
This is called an incremental build (we build only what was needed instead of building everything from scratch).
As we mentioned above, this is not very interesting in our tiny example. However, once there are thousands of input files, the difference is enormous.
It is also possible to execute make index.html to ask for rebuilding just index.html. Again, the build is incremental.
If you wish to force a rebuild, execute make with -B. Often, this is called an unconditional build.
In other words, make allows us to capture the simple individual commands needed for a project build (no matter if we are compiling and linking C programs or generating a web site) into a coherent script. It rebuilds only things that need rebuilding and, more interestingly, it takes care of dependencies. For example, if scores.html is generated from scores.md, which is built from scores.csv, we only need to specify how to build scores.md from scores.csv and how to create scores.html from scores.md, and make will ensure the proper ordering.
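For illustration only, such a chain could be captured with two rules like this (a minimal sketch: the scores.* file names come from the hypothetical example above and the exact commands may differ in the real repository; recipe lines are indented with a tab):
# scores.md is rebuilt whenever scores.csv changes
scores.md: scores.csv
	./table.py <scores.csv >scores.md

# scores.html is rebuilt whenever scores.md (or the template) changes
scores.html: scores.md template.html
	pandoc --template template.html scores.md >scores.html
Asking for scores.html (make scores.html) will regenerate scores.md first if scores.csv is newer.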
Makefile explained
Makefile is a control file for the build system named make. In essence, it is a domain-specific language to simplify setting up the script with the should_generate constructs we mentioned above.
make distinguishes between tabs and spaces. All indentation in the Makefile must be done using tabs. You have to make sure that your editor does not expand tabs to spaces. This is also a common issue when copying fragments from a web browser. (Usually, your editor will recognize that Makefile is a special file name and switch to a tabs-only policy by itself.)
If you use spaces instead, you will typically get an error like Makefile:LINE_NUMBER: *** missing separator. Stop.
The Makefile contains a sequence of rules. A rule looks like this:
rules.html: rules.md template.html
	pandoc --template template.html rules.md >rules.html
The name before the colon is the target of the rule. That is usually a file name that we want to build. Here, it is rules.html.
The rest of the first line is the list of dependencies – files from which the target is built. In our example, the dependencies are rules.md and template.html. In other words: when these files (rules.md and template.html) are modified, we need to rebuild rules.html.
The third part is the following lines, which have to be indented by a tab. They contain the commands that have to be executed for the target to be built. Here, it is the call to pandoc.
make runs the commands if the target is out of date. That is, either the target file is missing, or one or more dependencies are newer than the target.
The rest of the Makefile is similar. There are rules for other files and also several special rules.
Special rules
The special rules are all, clean, and .PHONY. They do not specify files to be built, but rather special actions.
all is a traditional name for the very first rule in the file. It is called a default rule and it is built if you run make with no arguments. It usually has no commands and it depends on all files which should be built by default.
clean is a special rule that has only commands, but no dependencies. Its purpose is to remove all generated files if you want to clean up your work space. Typically, clean removes all files that are not versioned (i.e., not under Git control). This can be considered a misuse of make, but one with a long tradition.
From the point of view of make, the targets all and clean are still treated as file names. If you create a file called clean, the special rule will stop working, because the target will be considered up to date (it exists and no dependency is newer).
To avoid this trap, you should explicitly tell make that the target is not a file. This is done by listing it as a dependency of the special target .PHONY (note the leading dot).
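Put together, the corresponding part of a Makefile could look like this sketch (using the file names from our example; the actual Makefile in the repository may differ, and recipe lines are indented with a tab):
all: index.html rules.html score.html

clean:
	rm -f index.html rules.html score.html

# all and clean are not real files, so mark them as phony targets
.PHONY: all clean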
Generally, you can see that make has plenty of idiosyncrasies. It is often so with programs which started as a simple tool and underwent 40 years of incremental development, slowly accruing features. Still, it is one of the most frequently used build systems. Also, it often serves as a back-end for more advanced tools – they generate a Makefile from a more friendly specification and let make do the actual work.
Exercise
Improving the maintainability of the Makefile
The Makefile is starting to contain too much repeated code. But make can help you with that too. Let's remove all the rules for generating out/*.html from *.md and replace them with:
out/%.html: %.md template.html
	pandoc --template template.html -o $@ $<
That is a pattern rule that captures the idea that HTML is generated from Markdown. The percent sign in the target and dependency specification represents the so-called stem – the variable (i.e., changing) part of the pattern.
In the command part, we use make variables. make variables start with a dollar sign as in the shell, but they are not the same. $@ is the actual target and $< is the first dependency.
Run make clean && make to verify that even with pattern rules, the web is still generated.
Apart from pattern rules, make also understands (user) variables. They can improve readability as you can separate configuration from commands. For example:
PAGES = \
    out/index.html \
    out/rules.html \
    out/score.html

all: $(PAGES) ...
...
Note that unlike in the shell, variables are expanded by the $(VAR) construct. (Except for the special variables such as $<.)
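The same variable can then be reused wherever the file list is needed, for example in the clean rule, so the list is maintained in a single place (a sketch):
clean:
	rm -f $(PAGES)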
Non-portable extensions
make is a very old tool that exists in many different implementations. The features mentioned so far should work with any version of make. (At least a reasonably recent one: old makes did not have .PHONY or pattern rules.)
The last addition will work in GNU make only (but that is the default on Linux, so there should not be any problem).
We will change the Makefile as follows:
PAGES = \
    index \
    rules \
    score

PAGES_TMP = $(addsuffix .html, $(PAGES))
PAGES_HTML = $(addprefix out/, $(PAGES_TMP))
We keep only the basename of each page and we compute the output path. $(addsuffix ...) and $(addprefix ...) are calls to built-in functions. Formally, all function arguments are strings, but in this case, the comma-separated names are treated as a list.
Note that we added PAGES_TMP only to improve readability when using this feature for the first time. Normally, you would assign PAGES_HTML directly like this:
PAGES_HTML=$(addprefix out/, $(addsuffix .html, $(PAGES)))
This will prove even more useful when we want to generate a PDF for each page, too.
We can add a pattern rule and build the list of PDFs using $(addsuffix .pdf, $(PAGES)).
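A possible sketch (only illustrative: pandoc needs a PDF engine such as pdflatex for this recipe, and the actual lab setup produced PDFs with a different, OpenOffice-based tool-chain):
PAGES_PDF = $(addprefix out/, $(addsuffix .pdf, $(PAGES)))

# illustrative recipe; requires a PDF engine (e.g. pdflatex) to be installed
out/%.pdf: %.md
	pandoc -o $@ $<

all: $(PAGES_HTML) $(PAGES_PDF)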
GitLab CI
If you have never heard the term continuous integration (CI), here it is in a nutshell.
About continuous integration
To ensure that the software you build is in a healthy state, you should run tests on it often and fix broken things as soon as possible (because the cost of a bug fix rises dramatically with each day it stays undiscovered).
The answer to this is that the developer should run the tests on each commit. Since that is difficult to enforce, it is better to do it automatically. CI in its simplest form refers to a setup where automated tests (e.g., BATS-based or Python-based) are executed automatically after each push, e.g. after pushing changes to any branch on GitLab.
But CI can do much more: if the tests are passing, the pipeline of jobs can package the software and publish it as an artifact (e.g., as an installer). Or it can trigger a job to deploy to a production environment and make it available to the customers. And so on.
Setting up CI on GitLab
In this text, you will see how to set up GitLab CI to your needs.
The important thing to know is that GitLab CI can run on top of Podman containers. Hence, to set up a GitLab pipeline, you choose a Podman image and the commands which should be executed inside the container. GitLab will start the container for you and run your commands in it. Depending on the outcome of the whole script (i.e., its exit code), it will mark the pipeline as passing or failing.
Recall the task 12/test-in-alpine.txt: that was actually a preparation for CI.
In this course we will focus on the simplest configuration, where we want to execute tests after each commit. GitLab can be configured for more complex tasks where software can even be deployed to a virtual cloud machine, but that is unfortunately out of scope.
If you are interested in this topic, GitLab has extensive documentation. The documentation is often densely packed with a lot of information, but it is a great source of knowledge not only about GitLab, but about many software engineering principles in general.
.gitlab-ci.yml
The configuration of the GitLab CI is stored in the file .gitlab-ci.yml that has to be placed in the root directory of the project.
We expect that you have your own fork of the web repository and that you have extended the original Makefile.
We will now set up a CI job that only builds the web. It will be the most basic CI one can imagine. But at least it will ensure that the web is always in a buildable state.
However, to speed things up, we will remove the generation of PDFs from our Makefile, as the OpenOffice installation requires downloading 400MB, which is quite a lot to do for each commit.
image: fedora:37

build:
  script:
    - dnf install -y make pandoc
    - make
It specifies a pipeline job build (you will see this name in the web UI) that is executed using the fedora image and executes two commands. The first one installs the dependencies and the second one runs make.
Add the .gitlab-ci.yml to your Git repository (i.e., your fork), commit and push it.
If you open the project page in GitLab, you should see the pipeline icon next to it and it should eventually turn green.
The log of the job will probably look like this:
Running with gitlab-runner 15.11.0 (436955cb)
on gitlab.mff docker Mtt-jvRo, system ID: s_7f0691b32461
Preparing the "docker" executor 00:03
Using Docker executor with image fedora:37 ...
Pulling docker image fedora:37 ...
Using docker image sha256:34354ac2c458e89615b558a15cefe1441dd6cb0fc92401e3a39a7b7012519123 for fedora:37 with digest fedora@sha256:e3012fe03ccee2d37a7940c4c105fb240cbb566bf228c609d9b510c9582061e0 ...
Preparing environment 00:00
Running on runner-mtt-jvro-project-11023-concurrent-0 via gitlab-runner...
Getting source from Git repository 00:01
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/horkv6am/nswi-177-web/.git/
Checking out 58653aa3 as detached HEAD (ref is master)...
Removing out/index.html
Removing out/main.css
Removing out/rules.html
Removing out/score.html
Removing tmp/
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:33
Using docker image sha256:34354ac2c458e89615b558a15cefe1441dd6cb0fc92401e3a39a7b7012519123 for fedora:37 with digest fedora@sha256:e3012fe03ccee2d37a7940c4c105fb240cbb566bf228c609d9b510c9582061e0 ...
$ dnf install -y make pandoc
Fedora 37 - x86_64 43 MB/s | 82 MB 00:01
Fedora 37 openh264 (From Cisco) - x86_64 4.3 kB/s | 2.5 kB 00:00
Fedora Modular 37 - x86_64 17 MB/s | 3.8 MB 00:00
Fedora 37 - x86_64 - Updates 24 MB/s | 29 MB 00:01
Fedora Modular 37 - x86_64 - Updates 4.8 MB/s | 2.9 MB 00:00
Dependencies resolved.
================================================================================
Package Architecture Version Repository Size
================================================================================
Installing:
make x86_64 1:4.3-11.fc37 fedora 542 k
pandoc x86_64 2.14.0.3-18.fc37 fedora 21 M
Installing dependencies:
gc x86_64 8.0.6-4.fc37 fedora 103 k
guile22 x86_64 2.2.7-6.fc37 fedora 6.5 M
libtool-ltdl x86_64 2.4.7-2.fc37 fedora 37 k
pandoc-common noarch 2.14.0.3-18.fc37 fedora 472 k
Transaction Summary
================================================================================
Install 6 Packages
Total download size: 29 M
Installed size: 204 M
Downloading Packages:
(1/6): libtool-ltdl-2.4.7-2.fc37.x86_64.rpm 846 kB/s | 37 kB 00:00
(2/6): make-4.3-11.fc37.x86_64.rpm 9.4 MB/s | 542 kB 00:00
(3/6): gc-8.0.6-4.fc37.x86_64.rpm 595 kB/s | 103 kB 00:00
(4/6): pandoc-common-2.14.0.3-18.fc37.noarch.rp 8.4 MB/s | 472 kB 00:00
(5/6): guile22-2.2.7-6.fc37.x86_64.rpm 18 MB/s | 6.5 MB 00:00
(6/6): pandoc-2.14.0.3-18.fc37.x86_64.rpm 56 MB/s | 21 MB 00:00
--------------------------------------------------------------------------------
Total 51 MB/s | 29 MB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : pandoc-common-2.14.0.3-18.fc37.noarch 1/6
Installing : libtool-ltdl-2.4.7-2.fc37.x86_64 2/6
Installing : gc-8.0.6-4.fc37.x86_64 3/6
Installing : guile22-2.2.7-6.fc37.x86_64 4/6
Installing : make-1:4.3-11.fc37.x86_64 5/6
Installing : pandoc-2.14.0.3-18.fc37.x86_64 6/6
Running scriptlet: pandoc-2.14.0.3-18.fc37.x86_64 6/6
Verifying : gc-8.0.6-4.fc37.x86_64 1/6
Verifying : guile22-2.2.7-6.fc37.x86_64 2/6
Verifying : libtool-ltdl-2.4.7-2.fc37.x86_64 3/6
Verifying : make-1:4.3-11.fc37.x86_64 4/6
Verifying : pandoc-2.14.0.3-18.fc37.x86_64 5/6
Verifying : pandoc-common-2.14.0.3-18.fc37.noarch 6/6
Installed:
gc-8.0.6-4.fc37.x86_64 guile22-2.2.7-6.fc37.x86_64
libtool-ltdl-2.4.7-2.fc37.x86_64 make-1:4.3-11.fc37.x86_64
pandoc-2.14.0.3-18.fc37.x86_64 pandoc-common-2.14.0.3-18.fc37.noarch
Complete!
$ make
pandoc --template template.html -o out/index.html index.md
pandoc --template template.html -o out/rules.html rules.md
./table.py <score.csv | pandoc --template template.html --metadata title="Score" - >out/score.html
cp main.css out/
Cleaning up project directory and file based variables 00:01
Job succeeded
Note that GitLab will mount the Git repository into the container first and then execute the commands inside the clone. The commands are executed with set -e: the first failing command terminates the whole pipeline.
Try to emulate the above run locally.
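One possible way to do that with Podman could be the following (just a sketch run from the repository root; the /srv/web mount point is an arbitrary name and the :Z label matters only on SELinux systems):
podman run --rm -v "$PWD:/srv/web:Z" -w /srv/web fedora:37 bash -c 'dnf install -y make pandoc && make'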
Exercise
Other bits
Notice how using the GitLab pipeline is easy. You find the right image, specify your script, and GitLab takes care of the rest.
If you are unsure about which image to choose, official images are a good start. The script can have several steps where you install missing dependencies before running your program.
Recall that you do not need to create a virtual environment: the whole machine is yours (and will be removed afterwards), so you can install things globally. Recall the example above where we executed pip install without starting a virtual environment.
There can be multiple jobs defined that are run in parallel (actually, there can be quite complex dependencies between them, but in the following example, all jobs are started at once).
The example below shows a fragment of .gitlab-ci.yml that tests the project on multiple Python versions.
# Default image if no other is specified
image: python:3.10

stages:
  - test

# Commands executed before each "script" section (for any job)
before_script:
  # To have a quick check that the version is correct
  - python3 --version
  # Install the project
  - python3 -m pip install ...

# Run unit tests under different versions
unittests3.7:
  stage: test
  image: "python:3.7"
  script:
    - pytest --log-level debug tests/

unittests3.8:
  stage: test
  image: "python:3.8"
  script:
    - pytest --log-level debug tests/

unittests3.9:
  stage: test
  image: "python:3.9"
  script:
    - pytest --log-level debug tests/

unittests3.10:
  stage: test
  image: "python:3.10"
  script:
    - pytest --log-level debug tests/
Before-class tasks (deadline: start of your lab, week May 15 - May 19)
The following tasks must be solved and submitted before attending your lab. If you have a lab on Wednesday at 10:40, the files must be pushed to your repository (project) at GitLab on Wednesday at 10:39 at the latest.
For the virtual lab, the deadline is Tuesday 9:00 AM every week (regardless of vacation days).
All tasks (unless explicitly noted otherwise) must be submitted to your submission repository. For most of the tasks there are automated tests that can help you check completeness of your solution (see here how to interpret their results).
14/web/Makefile (100 points, group devel)
UPDATES
Please, move your solution into 14/web so that it does not clash with the post-class tasks. If you have your solution committed already, the following commands should do the trick (do not forget to commit the changes).
git mv 14 14-web
mkdir -p 14/
git mv 14-web 14/web
We have added clarifications to some of the parts; these are highlighted.
Because of the above, the deadline for this task is shifted by about two days, until 9:00 AM. If you visit the labs on Monday at 15:40, we will fetch the content of your repository on Wednesday at 9:00. We hope this is enough time to move your files to the new location.
This task will extend the running example that we have used throughout this lab.
You already did the following (but it is also part of the evaluation of this task):
- Generate index.html and rules.html from the respective *.md files.
- Store the generated files in the out/ subdirectory.
- The clean target removes all files in out/ (except for .gitignore).
As a new feature, we expect you will extend the example with the following:
- Move the source files to the src/ subdirectory. This is a mandatory part; without this move, none of the tests will work. We expect you to move the files yourself, i.e. not during the build. The purpose is to make the directory structure a bit cleaner. There should thus be a file 14/web/src/index.md committed in your repository.
- Generate pages from *.csv files. There is already generation of score.html from score.csv. We expect you to add your own group-a.csv and group-b.csv files that will be generated into group-a.html and group-b.html (using the table.py script as for score.csv). group-a.html and group-b.html should be generated by default.
- Generate pages from *.bin files. We expect that such a file will have the same basename as the resulting .html and that it will take care of the complete content generation. The test creates a from-news.bin script for testing this; your solution must use pattern rules with proper stems.
- Add a phony target spelling that lists typos in the Markdown files. We expect you to use aspell for this task and use English as the master language (see the example invocation below).
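For instance, the core of the spelling check could be an invocation along these lines (an illustrative sketch only; you still need to wrap it in a phony spelling target in your Makefile):
# print suspected typos from all Markdown sources, each word once
cat src/*.md | aspell --lang=en list | sort -u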
Hint #1: use the PAGES variable to set the list of generated files, as it simplifies the maintenance of your Makefile.
Hint #2: the following is a simple example of a dynamically generated web page that can be stored inside src/news.bin. The script is a little bit tricky as it contains data as part of the script and uses $0 to read itself (a similar trick is often used when creating self-extracting archives for Linux).
#!/bin/bash
set -ueo pipefail
sed '1,/^---NEWS-START-HERE---/d' "$0" | while read -r date comment; do
echo "<dt>$date</dt>"
echo "<dd>$comment</dd>"
done | pandoc --template template.html --metadata title="News" -
exit 0
# Actual news are stored below.
# Add each news item on a separate line
#
# !!! Do not modify the line NEWS-START-HERE !!!
#
---NEWS-START-HERE---
2023-05-01 Website running
2023-05-02 Registration open
Post-class tasks (deadline: June 4)
We expect you to solve the following tasks after attending the labs and hearing feedback on your before-class solutions.
All tasks (unless explicitly noted otherwise) must be submitted to your submission repository. For most of the tasks there are automated tests that can help you check completeness of your solution (see here how to interpret their results).
14/cc/Makefile (40 points, group devel)
Convert the shell builder of an executable (built from C sources) into a make-based build.
The sources are in the examples repository (in 14/cc).
The Makefile you create shall offer the following:
- The default target all builds the example executable.
- A special target clean removes all intermediary files (*.o) as well as the final executable (example).
- Object files (.o) are built for each source file separately; we recommend using a pattern rule.
- Object files must depend on the corresponding source (.c) file as well as on the header file.
Please, commit the source files to your repository as well.
14/group-sum-ci.yml (60 points, group git)
Prepare a CI file for your fork of the group-sum repository that will run its tests automatically on each commit.
We expect that you still have your fork; if not, fork it again and do not forget to merge it with the branch tests of the repository at gitolite3@linux.ms.mff.cuni.cz:lab07-group-sum-ng.git so that you have the tests.bats file.
We suggest that you change the project visibility of your fork to private. Keep the tests.bats filename intact.
Create a .gitlab-ci.yml in this repository (i.e., your fork of group-sum) with the right configuration that is able to run the BATS tests.
Check that after each commit (i.e., push) the pipeline is executed and the tests are passing.
Then copy the .gitlab-ci.yml into your submission repository as 14/group-sum-ci.yml. Do not reconfigure your submission repository to run these tests; we are interested in the configuration file only (but make sure it works for you in your fork).
Learning outcomes
Learning outcomes provide a condensed view of fundamental concepts and skills that you should be able to explain and/or use after each lesson. They also represent the bare minimum required for understanding subsequent labs (and other courses as well).
Conceptual knowledge
Conceptual knowledge is about understanding the meaning and context of given terms and putting them into context. Therefore, you should be able to …
- explain principles of continuous integration
- explain advantages of using continuous integration
- name several steps that are often required to create distributable software (e.g. a package or an installer) from source code and other basic artifacts
- explain why software build should be a reproducible process
- explain how it is possible to capture a software build
- explain concepts of languages that are used for capturing steps needed for software build (distribution)
- explain in broad sense how GitLab CI works
Practical skills
Practical skills are usually about usage of given programs to solve various tasks. Therefore, you should be able to …
- build a make-based project with default settings
- create a Makefile that drives the build of a simple project
- use wildcard rules in a Makefile
- set up GitLab CI for simple projects
- optional: use variables in a Makefile
- optional: use basic GNU extensions to simplify complex Makefiles
This page changelog
- 2023-05-23: Note about committing source files for post-class task.
- 2023-05-14: Tests and clarification for before-class tasks.
- 2023-05-15: Added post-class tasks.