7. Docker
Docker is a powerful platform for developing, shipping, and running applications in a consistent and efficient manner. It utilizes containerization technology to package an application and all its dependencies into a standardized unit called a "container." These containers can run on any system that supports Docker, ensuring consistency across different environments, such as development, testing, and production.
Docker simplifies the process of creating, deploying, and scaling applications, making it easier for developers and operations teams to work together seamlessly. It provides a lightweight, isolated environment for applications, allowing them to run consistently across various infrastructure setups, from local development machines to cloud-based servers. Docker has become a fundamental tool in modern software development and deployment workflows, enabling faster, more reliable application delivery.
Our Docker images are hosted on Docker Hub. The images come with ROS and all dependencies installed.
There are three essential elements to Docker: the Dockerfile, the image, and the container.
A Dockerfile is a plain text configuration file used in Docker, a containerization platform. It contains a set of instructions that define how to build a Docker image. Docker images are lightweight, portable, and self-sufficient packages that include everything needed to run a software application, including the code, runtime, libraries, and system tools. The Dockerfile specifies the base image, sets up the environment, copies files, and defines commands to be executed during the image creation process. Once a Docker image is built using the instructions in the Dockerfile, it can be used to create and run containerized instances of the application in a consistent and reproducible manner across different environments.
A Docker image is a lightweight, standalone, and executable package that includes everything needed to run a software application, such as the code, runtime, libraries, and system tools. It is a snapshot of a filesystem and a set of instructions for running a specific software application. A Docker image is built from a Dockerfile.
A Docker container is an instance of a Docker image. Docker images can be shared to create identical containers. A container behaves very similarly to a virtual machine that can be accessed only through the terminal.
Docker Compose is a tool used to create containers from images. Without Docker Compose, a `docker run` command needs to be issued from the terminal. This is inconvenient because you often need to specify many arguments, which can lead to bloated `docker run` commands. With Docker Compose, you can specify all the arguments needed to create a container from an image in a `compose.yml` file.
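As a sketch of the idea, a `compose.yml` captures once what would otherwise be a long string of `docker run` flags. The service name and volume mount below are illustrative placeholders, not the repo's actual configuration:

```yaml
services:
  auv:                                     # hypothetical service name
    image: mcgillrobotics/auv_2024:base    # image tag from Docker Hub
    network_mode: host                     # example flag that would otherwise be --network host
    volumes:
      - ./catkin_ws:/root/catkin_ws        # illustrative mount, not the repo's real one
    stdin_open: true                       # equivalent of -i
    tty: true                              # equivalent of -t
```

With this file in place, `docker compose up` recreates the same container without retyping any flags.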
Install Docker Desktop here. Make sure you have the Docker Desktop app running when you want to use Docker. On Ubuntu/Debian, you can instead install Docker from the command line:

```shell
sudo apt install docker.io
```
After cloning the AUV-2024 repo, enter the Docker container by running `docker compose up`. The first time you do this, it will pull the image, which may take a while. Every time after that, it will use the cached image that was already downloaded. The `docker compose up` command creates a container using our image and the correct arguments. To stop the container, press Ctrl-C in the terminal. To delete the container, run `docker compose down`. If your image is out of date, delete the container, run `docker compose pull`, then `docker compose up`.
It is recommended you install the Docker and Remote Development extensions in VS Code. These allow you to open VS Code windows inside the container by navigating to the Docker tab in VS Code, right-clicking on the container, and selecting "Attach Visual Studio Code".
This guide will walk you through the steps to ensure your Docker images are compatible across multiple platforms (arm64 and amd64 machines).
- Before proceeding, make sure you're working with the latest version of the image by pulling it from Docker Hub. Replace `<image_type>` with the appropriate tag (e.g. `base` or `poolside`):

```shell
docker pull mcgillrobotics/auv_2024:<image_type>
```
- Next, make sure you have the `buildx` command installed:

```shell
docker buildx version
```

If it's not installed, install it using brew (for macOS):

```shell
brew install docker-buildx
```

Then run `docker buildx version` again to ensure that buildx is properly installed.
- To create multi-platform images, you need to create a new buildx builder:

```shell
docker buildx create --name multi_architecture_builder --use
```

Ensure it's running correctly by listing all your active containers:

```shell
docker ps -a
```

Then pull the latest image again to make sure everything is up to date:

```shell
docker pull mcgillrobotics/auv_2024:<image_type>
```
- Now, navigate to the directory where the Dockerfile is located, and run the following command to build and push the image for both amd64 and arm64 (replace `<image_type>` with the appropriate tag, e.g. `base` or `poolside`):

```shell
docker buildx build --platform linux/amd64,linux/arm64 -t mcgillrobotics/auv_2024:<image_type> --push .
```
Now just repeat the process by navigating to the different Dockerfiles inside the Docker directory, pulling the latest images, and repeating the build-and-push step above with the appropriate `<image_type>`.
To use your dedicated GPU in the container, comment out the `deploy` section of the `compose.yml` file. Install `nvidia-container-toolkit` and restart your computer. Now when you run `docker compose up`, your GPU will be visible in the container. For some reason this breaks Gazebo, but Gazebo runs well without a GPU. This is useful for training YOLO models without having to install Ultralytics on your machine.
To test your GPU, open Python and run `import torch` and `torch.cuda.is_available()`.
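A minimal sketch of that check, assuming PyTorch is installed in the container (as it is in the image):

```python
import torch

# Report whether PyTorch can see a CUDA-capable GPU from inside the container.
print("CUDA available:", torch.cuda.is_available())

# If a GPU is visible, also print the name of device 0.
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```

If the GPU passthrough is working, `torch.cuda.is_available()` returns `True`; if it returns `False`, double-check that `nvidia-container-toolkit` is installed and the container was recreated after the `compose.yml` change.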