Let's say that you found an image from the Package List or DockerHub, or built a container - the normal way to run an interactive Docker container on your Jetson using `docker run` looks like this:
$ sudo docker run --runtime nvidia -it --rm --network=host CONTAINER:TAG
That's actually a rather minimal command: it doesn't have support for displays or other devices, and it doesn't mount the model/data cache (`/data`). Once you add everything in, it can get to be a lot to specify by hand. Hence, we have some helpers that provide shortcuts.
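To give a sense of what "everything" can entail, here's a hedged sketch of how such a fully-specified command grows by hand - the particular device and X11 socket paths are illustrative assumptions, not the precise flags the helpers use. The sketch builds the command as a string and prints it rather than executing it:

```shell
# Build up an illustrative fully-specified command and print it
# (printed rather than executed, since this is just a sketch).
# /dev/video0 and the X11 socket path are assumptions for illustration.
cmd="sudo docker run --runtime nvidia -it --rm --network=host"
cmd="$cmd --volume /data:/data"                                   # model/data cache
cmd="$cmd --device /dev/video0"                                   # a V4L2 camera
cmd="$cmd --env DISPLAY --volume /tmp/.X11-unix:/tmp/.X11-unix"   # display
cmd="$cmd CONTAINER:TAG"

echo "$cmd"
```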
The `run.sh` launcher forwards its command-line to `docker run`, with some added defaults - including the above flags, mounting the `/data` cache, and mounting V4L2 and display devices.
$ ./run.sh CONTAINER:TAG # run with --runtime=nvidia, default mounts, etc.
$ ./run.sh CONTAINER:TAG my_app --abc xyz # run a command (instead of interactive mode)
$ ./run.sh --volume /path/on/host:/path/in/container CONTAINER:TAG # mount a directory
The flags and arguments to `run.sh` are the same as they are to `docker run` - anything you specify will be passed along.
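The forwarding idea can be sketched in a few lines - this is a simplified stand-in, not the real `run.sh` (which also adds the device and volume mounts described above), and it echoes the final command instead of executing it:

```shell
# Minimal pass-through launcher sketch: inject the default flags,
# then forward every user-supplied argument untouched via "$@".
run_sketch() {
    echo sudo docker run --runtime nvidia -it --rm --network=host "$@"
}

# Extra flags and the image name flow straight through to docker run.
run_sketch --volume /path/on/host:/path/in/container CONTAINER:TAG
```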
To solve the issue of finding a container with the package(s) you want that's compatible with your version of JetPack/L4T, there's the `autotag` tool. It locates a container image for you - either locally, pulled from a registry, or built from source:
$ ./run.sh $(./autotag pytorch) # find pytorch container to run for your version of JetPack/L4T
What's happening here with the `$(./autotag xyz)` syntax is that Bash command substitution expands to the full container image name, which gets forwarded to the `docker run` command. For example, if you do `echo $(./autotag pytorch)`, it would print out something like `dustynv/pytorch:r35.2.1` (assuming that you don't already have the `pytorch` image locally).
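To see the mechanics in isolation, here's a small demo using a stub function in place of `./autotag` (whose real output depends on your system):

```shell
# fake_autotag stands in for ./autotag; it just prints a fixed image name.
fake_autotag() { echo "dustynv/pytorch:r35.2.1"; }

# $( ) captures the command's stdout, so the image name lands inline
# in whatever command you build around it.
image=$(fake_autotag)
echo "docker run ... $image"
```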
You can of course intersperse `autotag` with other command-line arguments when launching the container:
$ ./run.sh $(./autotag pytorch) my_app --abc xyz # run a command (instead of interactive mode)
$ ./run.sh --volume /path/on/host:/path/in/container $(./autotag pytorch) # mount a directory
Or using `docker run` directly:
$ sudo docker run --runtime nvidia -it --rm --network=host $(./autotag pytorch)
This is the order in which `autotag` searches for container images:

1. Local images (found under `docker images`)
2. Pulled from a registry (by default `hub.docker.com/u/dustynv`)
3. Built from source (it'll ask for confirmation first)
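The fallback order above can be sketched as a simple if/elif chain - the stub checks below are assumptions standing in for the real local-image and registry lookups, not `autotag`'s actual code:

```shell
# Stubs for illustration: pretend there's no local image,
# but the registry does have one.
have_local()   { false; }
registry_has() { true; }

find_image() {
    if have_local "$1"; then
        echo "local/$1"
    elif registry_has "$1"; then
        echo "dustynv/$1"                # pulled from the default registry
    else
        echo "build-from-source: $1"     # would ask for confirmation first
    fi
}

find_image pytorch    # with the stubs above, resolves via the registry
```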
When searching for images, it knows to find containers that are compatible with your version of JetPack/L4T. For example, if you're on JetPack 4.6.x (L4T R32.7.x), you can run images that were built for other versions of JetPack 4.6. Or if you're on JetPack 5.1 (L4T R35), you can run images built for other versions of JetPack 5.1.
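That compatibility rule amounts to matching on the JetPack major.minor version while ignoring the patch level. Here's a hedged sketch of such a check (an illustration of the rule, not `autotag`'s actual logic; versions are written with three components):

```shell
# Compare two three-component JetPack versions on major.minor only.
same_minor() {
    [ "${1%.*}" = "${2%.*}" ]   # ${var%.*} strips the trailing .patch
}

same_minor 4.6.1 4.6.3 && echo compatible      # both JetPack 4.6.x
same_minor 4.6.1 5.1.0 || echo incompatible    # different JetPack lines
```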