This project can be used to spin up a droplet (virtual machine) on DigitalOcean.
The droplet is configured on first boot in order to provide various services to end users.
The droplet OS is Ubuntu 20.04 (Focal Fossa).
- A default user named `lxx` is created with home directory `/home/lxx`
- This user can use `sudo` without a password and is a member of the `docker` group
- The `journald` log system is configured to store logs persistently by default.
- Password authentication is disabled
- SSH keypairs must be used
- Port 22 is used by default
- A single SSH key can connect to both the `root` and `lxx` users by default
Note: Use the `DO_SSH_KEY_NAME` environment variable to configure the name of the SSH key to allow.
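For illustration, assuming a hypothetical droplet public IP of `203.0.113.10`, the same key opens both sessions:

```bash
# Replace 203.0.113.10 with the droplet public IP (e.g. from `task show-droplet`)
ssh root@203.0.113.10   # connect as root
ssh lxx@203.0.113.10    # connect as the default user
```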
- Time synchronisation is enabled using `systemd-timesyncd`.
- The default NTP server is `time.windows.com`.
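Synchronisation can be verified from the droplet with standard `systemd` tooling (nothing project-specific is assumed here):

```bash
# Show current synchronisation status and the NTP server in use
timedatectl timesync-status
# The configured server should appear in the timesyncd configuration
grep NTP /etc/systemd/timesyncd.conf
```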
- Docker Engine is installed and can be used via the `docker` command.
- Only users in the `docker` group can run docker commands without `sudo`
- Engine configuration can be found under `/etc/docker/daemon.json`
- A single-node Docker Swarm cluster is running by default.
- The Docker Swarm advertise address is the droplet's private address within its VPC, which means a new droplet can be created in the same VPC and joined to the swarm cluster (see the sketch below)
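A minimal sketch of joining a second droplet, assuming it lives in the same VPC and can reach the manager's private address (the commands are standard Docker Swarm CLI; the IP below is hypothetical):

```bash
# On the existing droplet (swarm manager): print the worker join command
docker swarm join-token worker

# On the new droplet in the same VPC: run the printed command, e.g.
docker swarm join --token <worker-token> 10.110.0.2:2377
```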
- Traefik reverse proxy is installed using Docker Swarm and listens on ports `80` and `443` on the host machine.
- The Traefik dashboard can be reached at `https://traefik.<domain>` (default `https://traefik.lxx.quest`).

Note: Update `DOMAIN_NAME` to configure the domain to use. For example, using `DOMAIN_NAME=test.lxx.quest` leads to the Traefik dashboard listening on `https://traefik.test.lxx.quest`.
In order to expose Docker services using Traefik, containers must be attached to the `traefik-network` (Docker overlay network), as in the sketch below.
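A minimal sketch, assuming the default domain and that Traefik discovers swarm services through labels; the service name, router name, port, and TLS details are illustrative and depend on the actual Traefik configuration shipped with this project:

```bash
# Deploy a demo service attached to traefik-network and routed by Traefik
docker service create \
  --name whoami \
  --network traefik-network \
  --label traefik.enable=true \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.lxx.quest`)' \
  --label traefik.http.routers.whoami.tls=true \
  --label traefik.http.services.whoami.loadbalancer.server.port=80 \
  traefik/whoami
```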
- MinIO server is installed using Docker Swarm and is exposed behind Traefik:
  - MinIO Dashboard can be reached at `https://minio.<domain>` (default `https://minio.lxx.quest`)
  - MinIO Server can be reached at `https://storage.<domain>` (default `https://storage.lxx.quest`)
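For example, the S3 endpoint can be used with the MinIO client, assuming the default domain (the alias name is arbitrary and the credential placeholders depend on how the MinIO deployment is configured):

```bash
# Register the endpoint under an alias, then list buckets
mc alias set lxx https://storage.lxx.quest <ACCESS_KEY> <SECRET_KEY>
mc ls lxx
```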
- Jupyterhub is installed using TLJH
- Jupyter users are authenticated using `systemd`.
  - Admin user is `lxx`
  - Password is defined on first connection
  - New users can be created from the admin panel
  - Unix users will be created automatically
- TLS certificates are available under `/etc/lego/certificates`
- Certificates are renewed 30 days before expiration (check performed daily)
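To check the validity dates of the current certificate from the droplet, something like the following should work (a sketch: the exact file name under `/etc/lego/certificates` depends on the configured domain and on lego's naming scheme):

```bash
# Print the notBefore/notAfter dates of the certificate for the default domain
sudo openssl x509 -in /etc/lego/certificates/lxx.quest.crt -noout -dates
```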
- A Linux system is required to contribute to the project.
- A Linux system is required to use the project.

Note: `terraform` can be used on Windows, as well as `task`, but the `Taskfile` is not cross-platform.
- Install Terraform - https://learn.hashicorp.com/tutorials/terraform/install-cli#install-terraform
- Install doctl - https://docs.digitalocean.com/reference/doctl/how-to/install/
  - List droplet sizes: `doctl compute size list --output json | jq -r "[.[].slug]"`
  - List droplet regions: `doctl compute region list --output=json | jq '[.[].slug]'`
- Make sure `terraform` is installed.
- Make sure `task` is installed.
- Create a file named `.env` and write options:
```
DO_TOKEN: <secret>
DO_SSH_KEY_NAME: <your_ssh_key_name>
DO_DOMAIN_NAME: lxx.quest
DOMAIN_NAME: test.lxx.quest
DROPLET_SIZE: s-2vcpu-4gb
DROPLET_NAME: lxx-test
DB_CLUSTER_NAME: lxx-test-cluster
DB_CLUSTER_SIZE: db-s-1vcpu-1gb
DO_REGION: ams3
```
Don't forget to review and update the values; cost will depend on the configured sizes.
The configuration used above has a total cost of $35/month:
- droplet: $20/month
- database cluster: $15/month

Other resources (domain records and firewalls) do not have a cost.
- Initialize terraform environment:

```bash
task init
```

- Prepare deployment:

```bash
task plan
```

You should see a summary of resources to create in the console.

- Apply deployment:

```bash
task apply
```

- Inspect the droplet state:

```bash
task show-droplet
```

- Connect to the droplet:

```bash
task ssh
```

- Connect using VSCode:

```bash
task code
```
When using WSL, the Windows user's SSH key must be allowed to connect to the remote host instead of the Linux user's SSH key.
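For example, the Windows user's public key is usually readable from WSL at a path like the following (the username and key file name are assumptions and depend on the local setup):

```bash
# Hypothetical path to the Windows user's public key, as seen from WSL
cat /mnt/c/Users/<windows_username>/.ssh/id_rsa.pub
```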
Navigate into the `tf/` directory and use the `terraform` CLI directly to:
- provide variables
- plan deployment
- apply deployment
- destroy deployment
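A minimal sketch of that workflow, assuming variables are supplied through standard Terraform mechanisms (`TF_VAR_*` environment variables, `-var` flags, or a `*.tfvars` file; the exact variable names are defined in the `tf/` configuration):

```bash
cd tf/
terraform init               # download providers and initialize the working directory
terraform plan -out=tfplan   # preview the resources to create
terraform apply tfplan       # apply the previewed plan
terraform destroy            # tear everything down when finished
```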