This repository contains the backend of the new reservation platform for the City of Helsinki. Its main purpose is to act as a backend for tilavarauspalvelu-ui through the GraphQL API.
For more detailed information, please refer to the Tilavarauspalvelu page in Confluence. This is also where you can find the list of project members. The preferred contact method is through Helsinki City Slack.
The backend is built on the following technologies:
- PostgreSQL (with the PostGIS extension) for database needs
- Redis for in-memory caching
- Celery for scheduling and background task handling
- Elasticsearch (you know, for search)
- Poetry for dependency management
- Docker for containerization
- Django as the web framework
- Graphene as the GraphQL framework
- Make (Windows, Mac) for running common commands
It integrates with a number of other City of Helsinki services:
- Authentication with Tunnistamo
- Profile data from Helsinki Profile
- Opening hours from Hauki
- Unit information from Toimipaikkarekisteri
- Payments handled by Helsinki Web Shop
These instructions will set up the backend for local development using Docker. This is recommended especially for frontend developers, as it requires fewer dependencies.
Before we start, make sure Docker and Make are installed on your system. Then, follow the steps below.
- Copy `.env.example` to `.env`: `cp .env.example .env`
- Build and run the backend with Docker: `make run`
You should now be able to log into the Django admin panel at `localhost:8000/admin/`. The GraphQL endpoint is at `localhost:8000/graphql/`.
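To quickly check that the GraphQL endpoint responds, you can send a minimal query from the command line. This is only a sketch; any GraphQL client (or the browser view at the endpoint) works just as well, and depending on your CSRF configuration you may need to use the browser instead.

```bash
# Ask the endpoint for the name of its root type -- works against any GraphQL API.
curl -X POST http://localhost:8000/graphql/ \
  -H "Content-Type: application/json" \
  -d '{"query": "{ __typename }"}'
```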
To generate test data, follow the steps below.
- Connect to the running container: `make bash`
- (Re)create the Elasticsearch indices: `make indices`
- Generate test data: `make generate`
These instructions will set up the backend for local development without Docker. This is mainly for backend developers, as it requires more dependencies and setup.
Before you start, you'll need the following dependencies:
- CPython (check `pyproject.toml` for the version)
- Poetry (latest version)
- Make (Windows, Mac)
- PostgreSQL (version 13 or newer, with the PostGIS extension)
- Redis (version 7 or newer)
- Elasticsearch (version 8.8)
- GDAL (version compatible with Django; check the Django documentation for more info)
  - Ubuntu: `sudo apt-get install gdal-bin`
  - Mac: `brew install gdal`
  - Windows: Use WSL. We haven't gotten GDAL & Django to work on Windows yet.
- gettext
  - Ubuntu: `sudo apt-get install gettext`
  - Mac: `brew install gettext`
  - Windows: https://mlocati.github.io/articles/gettext-iconv-windows.html
Installation instructions for these dependencies depend on your OS and can change over time, so please refer to each dependency's official documentation for up-to-date setup steps.
You can skip installing PostgreSQL, Redis, and Elasticsearch locally by running them in Docker. To do this, install Docker and run `make services`.
Now, follow the steps below.
- Copy `.env.example` to `.env`: `cp .env.example .env`
This file contains environment variables used by the project. You can modify these to suit your local development environment.
- Copy `local_settings_example.py` to `local_settings.py`: `cp local_settings_example.py local_settings.py`
These can be used to modify settings for local development without changing the main settings file.
- Create a virtual environment & install dependencies: `poetry install`
- Add pre-commit hooks: `poetry run make hooks`
- Run migrations: `poetry run make migrate`
- (Re)create the Elasticsearch indices: `poetry run make indices`
- Generate test data: `poetry run make generate`
- Start the server: `poetry run make dev`
The backend should now be running at `localhost:8000`.
Tests are run with `pytest`.
Some flags that can save time when running tests:
- To skip slow-running tests: `pytest --skip-slow`
- To skip tests requiring Elasticsearch: `pytest --skip-elastic`
- To retain the test database between runs: `pytest --reuse-db`
- To skip migration checks at the start of tests: `pytest --no-migrations`
- To run tests in parallel: `pytest -n 8 --dist=loadscope` (= 8 cores; use `-n auto` to use all available cores)
You can use a `pytest.ini` file to set up these flags for local development.
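For example, a `pytest.ini` along these lines applies a set of the flags above automatically. This is only a sketch; pick the flags that suit your own workflow, and note that a `pytest.ini` takes precedence over any pytest configuration in `pyproject.toml`, so you may need to carry required options over.

```ini
# pytest.ini -- local-only sketch; the flags here are examples, not recommendations.
[pytest]
addopts = --skip-slow --reuse-db -n auto --dist=loadscope
```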
Dependencies are managed by Poetry. Normally, they are updated automatically by Dependabot without any manual intervention (provided the updates don't fail any automated tests).
However, if you want to update them manually, you can do so by running:
poetry update
This will update all dependencies according to the rules defined in `pyproject.toml`.
To see all outdated dependencies, run:
poetry show --outdated
Note that this will also include any sub-dependencies that are not directly defined in `pyproject.toml`.
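If your Poetry version supports the `--top-level` flag (check `poetry show --help`), the listing can be limited to the dependencies declared directly in `pyproject.toml`:

```bash
poetry show --outdated --top-level
```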
Scheduled & background tasks are run with Celery.
When developing locally, you can run these tasks in a Celery worker with `make celery`.
This uses the filesystem as the message broker.
You'll need to create the queue and processed folders according to the `CELERY_QUEUE_FOLDER_OUT`, `CELERY_QUEUE_FOLDER_IN`, and `CELERY_PROCESSED_FOLDER` environment variables (see `.env.example`).
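For example, assuming those variables in your `.env` point at a local `broker/` directory (the paths below are placeholders, not the project's defaults), the setup might look like this:

```bash
# Placeholder paths: use whatever CELERY_QUEUE_FOLDER_OUT, CELERY_QUEUE_FOLDER_IN
# and CELERY_PROCESSED_FOLDER are set to in your .env.
mkdir -p broker/queue broker/processed

# Start a local Celery worker that uses the filesystem broker.
make celery
```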
If you want to run background tasks synchronously without Celery, set the environment variable `CELERY_ENABLED` to `False`. Scheduled tasks still need the worker in order to run.
Authentication is handled by Tunnistamo using the django-helusers library.
You'll need to get the `TUNNISTAMO_ADMIN_SECRET` from the Azure Pipelines library or from a colleague, and set it in your `.env` file.
Instead of JWTs, authentication is managed via sessions. See the ADR in Confluence for why this decision was made.
Static files are served by the Whitenoise package. These are all files that are not uploaded by users through the Django admin pages.
Media files are served by the uWSGI static files implementation, offloaded to threads. These are all files uploaded by users through the Django admin pages.
If there are performance issues (i.e. 502 errors from the Application Gateway), it is very likely that the process count and/or process scale-up needs to be tweaked higher.
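As a rough illustration only (the real deployment configuration lives outside this repository), these are the kinds of uWSGI options that control process count and scale-up:

```ini
; Hypothetical uwsgi.ini fragment -- not the project's actual configuration.
[uwsgi]
processes = 8         ; maximum number of worker processes
cheaper = 2           ; minimum workers kept alive by the adaptive spawner
cheaper-initial = 4   ; workers started at boot
offload-threads = 4   ; threads used for offloaded (static/media) file serving
```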
Translations are handled by Django's built-in translation system.
GitHub Actions CI will check that all translations are up-to-date during PRs.
To update translations, run `make translations`. This will update the `.po` files located in the `locale` directory.
For model field translations, we use `django-modeltranslation`. The package has integrations in all the relevant parts of the project (serializers, admin, etc.). See the code for more details.
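As a rough sketch of what a field registration looks like with `django-modeltranslation` (the model and import path here are hypothetical; see the project's own `translation.py` modules for the real registrations):

```python
# Hypothetical registration sketch -- not copied from the project.
from modeltranslation.translator import TranslationOptions, register

from .models import ReservationUnit  # hypothetical import path


@register(ReservationUnit)
class ReservationUnitTranslationOptions(TranslationOptions):
    # Each listed field gets a per-language column (e.g. name_fi, name_en, name_sv).
    fields = ["name", "description"]
```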
For debugging during development, the Django Debug Toolbar package can be used. The Django GraphQL Debug Toolbar extension is used for the GraphQL endpoint.
You should add a `local_settings.py` at the root level of the project and add three classes called `LocalMixin`, `DockerMixin`, and `AutomatedTestMixin` to it. These can be used to override settings for local development (with and without Docker) and for automated tests, respectively.
Note that in order for development settings to work correctly, you need to set the `DJANGO_SETTINGS_ENVIRONMENT` environment variable to `Local` when running the server.
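A minimal sketch of what that `local_settings.py` might look like (the overrides shown are examples only, not project defaults; add whatever settings you actually need):

```python
# local_settings.py -- example overrides only; adjust to your environment.

class LocalMixin:
    # Applied when running the server locally (DJANGO_SETTINGS_ENVIRONMENT=Local).
    DEBUG = True


class DockerMixin:
    # Applied when running the backend inside Docker.
    pass


class AutomatedTestMixin:
    # Applied when running automated tests.
    pass
```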
It's recommended to set up ruff-lsp to enable Ruff linting and formatting support in your editor.
In production, a Varnish cache is used for reservation unit and purpose images. When a new image is uploaded, the existing images are removed from the cache using a purge task. For more details about how the purge is done, check the image cache utility.
In the settings there are four configurations:
- `IMAGE_CACHE_ENABLED`: toggle caching on/off
- `IMAGE_CACHE_VARNISH_HOST`: Varnish hostname
- `IMAGE_CACHE_PURGE_KEY`: secret key for purge requests
- `IMAGE_CACHE_HOST_HEADER`: `Host` header value in purge requests
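As an illustration, these might be set along the following lines in your environment configuration (all values below are placeholders):

```bash
IMAGE_CACHE_ENABLED=True
IMAGE_CACHE_VARNISH_HOST=https://varnish.example.com
IMAGE_CACHE_PURGE_KEY=<secret-purge-key>
IMAGE_CACHE_HOST_HEADER=example.hel.fi
```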