This repository builds a Docker image for serving the Jupyter Lab application from a container hosted on a GPU-accelerated machine that is itself behind a reverse proxy. See Cloud-in-a-Box for a complete deployment stack with authentication and resource monitoring.
WARNING: The `entrypoint.sh` script disables the native token-based authentication in the Jupyter Lab application, since the resulting base image is intended to be hosted behind a reverse proxy with its own authentication and authorization flow.
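For reference, an entrypoint that disables token authentication typically launches Jupyter Lab with flags like the following. This is a hedged sketch, not the repository's actual script; the exact flags and base URL used by `entrypoint.sh` may differ:

```shell
#!/bin/bash
# Sketch of an entrypoint that turns off Jupyter's built-in auth
# (assumed flags; check the repository's entrypoint.sh for the real invocation).
jupyter lab \
  --ip=0.0.0.0 \
  --port=8888 \
  --no-browser \
  --allow-root \
  --ServerApp.base_url=/jupyter \
  --ServerApp.token='' \
  --ServerApp.password=''
```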
A version of cudaconda3 is available on Docker Hub, pre-built for CUDA 11.4 and Python 3.9:

```shell
docker pull tthebc01/cudaconda3
```
To build a local image and run it:

```shell
git clone https://github.com/TtheBC01/nvidia-miniconda.git
cd nvidia-miniconda
docker build -t cudaconda3 .
docker run --name cudaconda --rm -p 8888:8888 -d --gpus all cudaconda3
```
You should now be able to access the Jupyter Lab application from your browser at http://localhost:8888/jupyter/lab.
For proper operation, the cudaconda3 base image must match the CUDA version on the host machine. First, check the version of CUDA installed on the host machine you will be running the container on:

```shell
nvidia-smi
```
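The `nvidia-smi` header reports the CUDA version supported by the installed driver. As an illustration, a small (hypothetical) helper to pull that version string out of the command's output might look like:

```python
import re

def cuda_version_from_smi(output: str):
    """Extract the 'CUDA Version' field from nvidia-smi header output.

    Returns the version string (e.g. '11.4'), or None if not found.
    """
    match = re.search(r"CUDA Version:\s*([\d.]+)", output)
    return match.group(1) if match else None

# Example header line as printed by nvidia-smi (sample text, not live output):
sample = "| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |"
print(cuda_version_from_smi(sample))  # → 11.4
```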
Next, the `Dockerfile` is configured with two optional build arguments:

- `MINICONDA`: defaults to `Miniconda3-py39_4.12.0-Linux-x86_64.sh`; see the Miniconda installer page for more options (you'll need to use a Linux installer, though)
- `CUDATAG`: defaults to `11.4.0-base-ubuntu20.04`; see the `nvidia/cuda` Docker Hub repository for more options (stick to Debian-based tags)
Therefore, you can customize the build for your needs with a command like the following:

```shell
docker build --build-arg MINICONDA=Miniconda3-py37_4.12.0-Linux-x86_64.sh --build-arg CUDATAG=11.7.0-runtime-ubuntu20.04 -t cudaconda .
```
Depending on the version of CUDA you are using, the PyTorch installation command will look like:

```shell
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
```
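After installing, you can sanity-check that PyTorch actually sees the GPU. A minimal check might look like the following (it returns `None` if PyTorch is not installed at all, rather than raising):

```python
def cuda_ready():
    """Return True if PyTorch can see a CUDA device, False if it cannot,
    or None if PyTorch is not installed in this environment."""
    try:
        import torch
    except ImportError:
        return None
    return torch.cuda.is_available()

print(cuda_ready())
```

Run inside the container, this should print `True` when the `--gpus all` flag was passed and the image's CUDA tag matches the host driver.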