- container_image (example)
- container_bundle (example)
- container_import
- container_load
- container_pull (example)
- container_push (example)
These rules used to be `docker_build`, `docker_push`, etc., and the aliases for these (mostly) legacy names still exist largely for backwards compatibility. We also have early-stage `oci_image`, `oci_push`, etc. aliases for folks who prefer a consistent rule prefix. The only place the format-specific names currently do more than alias things is in `foo_push`, where they also specify the format in which to publish the image.
This repository contains a set of rules for pulling down base images, augmenting them with build artifacts and assets, and publishing those images. These rules do not require / use Docker for pulling, building, or pushing images. This means:
- They can be used to develop Docker containers on Windows / OSX without `boot2docker` or `docker-machine` installed.
- They do not require root access on your workstation.
Also, unlike traditional container builds (e.g. Dockerfile), the Docker images produced by `container_image` are deterministic / reproducible.
NOTE: `container_push` and `container_pull` make use of google/containerregistry for registry interactions.
- py_image (signature)
- py3_image (signature)
- nodejs_image (usage)
- java_image (signature)
- war_image (signature)
- scala_image (signature)
- groovy_image (signature)
- cc_image (signature)
- go_image (signature)
- rust_image (signature)
- d_image (signature)
It is notable that `cc_image`, `go_image`, `rust_image`, and `d_image` also allow you to specify an external binary target.
In addition to low-level rules for building containers, this repository provides a set of higher-level rules for containerizing applications. The idea behind these rules is to make containerizing an application built via a `lang_binary` rule as simple as changing it to `lang_image`.
By default these higher-level rules make use of the distroless language runtimes, but these can be overridden via the `base="..."` attribute (e.g. with a `container_pull` or `container_image` target).
Add the following to your `WORKSPACE` file to add the external repositories:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_docker",
sha256 = "6dede2c65ce86289969b907f343a1382d33c14fbce5e30dd17bb59bb55bb6593",
strip_prefix = "rules_docker-0.4.0",
urls = ["https://github.com/bazelbuild/rules_docker/archive/v0.4.0.tar.gz"],
)
load(
"@io_bazel_rules_docker//container:container.bzl",
"container_pull",
container_repositories = "repositories",
)
# This is NOT needed when going through the language lang_image
# "repositories" function(s).
container_repositories()
container_pull(
name = "java_base",
registry = "gcr.io",
repository = "distroless/java",
# 'tag' is also supported, but digest is encouraged for reproducibility.
digest = "sha256:deadbeef",
)
Suppose you have a `container_image` target `//my/image:helloworld`:
container_image(
name = "helloworld",
...
)
You can load this into your local Docker client by running `bazel run my/image:helloworld`.
For the `lang_image` targets, this will also run the container to maximize compatibility with `lang_binary` rules. You can suppress this behavior by passing the single flag: `bazel run :foo -- --norun`
Alternatively, you can build a `docker load` compatible bundle with `bazel build my/image:helloworld.tar`. This will produce the file `bazel-genfiles/my/image/helloworld.tar`, which you can load into your local Docker client by running `docker load -i bazel-genfiles/my/image/helloworld.tar`. Building this target can be expensive for large images.
These work with `container_image`, `container_bundle`, and the `lang_image` rules. For everything except `container_bundle`, the image name will be `bazel/my/image:helloworld`. For `container_bundle`, it will apply the tags you have specified.
You can use these rules to access private images using standard Docker authentication methods, e.g. to utilize the Google Container Registry credential helper:
$ gcloud components install docker-credential-gcr
$ docker-credential-gcr configure-docker
A common request from folks using `container_push` or `container_bundle` is to be able to vary the tag that is pushed or embedded. There are two options at present for doing this.
The first option is to use `stamp = True`.
# A common pattern when users want to avoid trampling
# on each other's images during development.
container_push(
name = "publish",
format = "Docker",
# Any of these components may have variables.
registry = "gcr.io",
repository = "my-project/my-image",
tag = "{BUILD_USER}",
creation_time = "{BUILD_TIMESTAMP}",
# Trigger stamping.
stamp = True,
)
The next natural question is: "Well, what variables can I use?" This option consumes the workspace-status variables Bazel defines in `stable-status.txt` and `volatile-status.txt`. These files will appear in the target's runfiles:
$ bazel build //docker/testdata:push_stamp
...
$ cat bazel-bin/docker/testdata/push_stamp.runfiles/io_bazel_rules_docker/stable-status.txt
BUILD_EMBED_LABEL
BUILD_HOST bazel
BUILD_USER mattmoor
$ cat bazel-bin/docker/testdata/push_stamp.runfiles/io_bazel_rules_docker/volatile-status.txt
BUILD_TIMESTAMP 1498740967769
You can augment these variables via `--workspace_status_command`, including through the use of `.bazelrc`.
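For example, a minimal sketch (the script path, flag placement, and variable names are hypothetical): point Bazel at a status script via `.bazelrc`, and have the script print additional key/value pairs. Keys prefixed with `STABLE_` land in `stable-status.txt`; all other keys go to `volatile-status.txt`.
# .bazelrc (hypothetical)
build --workspace_status_command=tools/workspace_status.sh
#!/bin/bash
# tools/workspace_status.sh (hypothetical): print one "KEY value" pair per line.
echo "STABLE_GIT_COMMIT $(git rev-parse HEAD)"
echo "DEPLOY_CHANNEL dev"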
The second option is to employ `Makefile`-style variables:
container_bundle(
name = "bundle",
images = {
"gcr.io/$(project)/frontend:latest": "//frontend:image",
"gcr.io/$(project)/backend:latest": "//backend:image",
}
)
These variables are specified on the CLI using:
bazel build --define project=blah //path/to:bundle
By default the `lang_image` rules use the distroless base runtime images, which are optimized to be the minimal set of things your application needs at runtime. That can make debugging these containers difficult because they lack even a basic shell for exploring the filesystem.
To address this, we publish variants of the distroless runtime images tagged `:debug`, which are the exact same images, but with additions such as busybox to make debugging easier.
For example (in this repo):
$ bazel run -c dbg testdata:go_image
...
INFO: Build completed successfully, 5 total actions
INFO: Running command line: bazel-bin/testdata/go_image
Loaded image ID: sha256:9c5c2167a1db080a64b5b401b43b3c5cdabb265b26cf7a60aabe04a20da79e24
Tagging 9c5c2167a1db080a64b5b401b43b3c5cdabb265b26cf7a60aabe04a20da79e24 as bazel/testdata:go_image
Hello, world!
$ docker run -ti --rm --entrypoint=sh bazel/testdata:go_image -c "echo Hello, busybox."
Hello, busybox.
container_image(
name = "app",
# References container_pull from WORKSPACE (above)
base = "@java_base//image",
files = ["//java/com/example/app:Hello_deploy.jar"],
cmd = ["Hello_deploy.jar"]
)
To use `cc_image`, add the following to `WORKSPACE`:
load(
"@io_bazel_rules_docker//cc:image.bzl",
_cc_image_repos = "repositories",
)
_cc_image_repos()
Then in your `BUILD` file, simply rewrite `cc_binary` to `cc_image` with the following import:
load("@io_bazel_rules_docker//cc:image.bzl", "cc_image")
cc_image(
name = "cc_image",
srcs = ["cc_image.cc"],
deps = [":cc_image_library"],
)
To use `cc_image` (or `go_image`, `d_image`, `rust_image`) with an external `cc_binary` (or the like) target, your `BUILD` file should instead look like:
load("@io_bazel_rules_docker//cc:image.bzl", "cc_image")
cc_binary(
name = "cc_binary",
srcs = ["cc_binary.cc"],
deps = [":cc_library"],
)
cc_image(
name = "cc_image",
binary = ":cc_binary",
)
To use `py_image`, add the following to `WORKSPACE`:
load(
"@io_bazel_rules_docker//python:image.bzl",
_py_image_repos = "repositories",
)
_py_image_repos()
Then in your `BUILD` file, simply rewrite `py_binary` to `py_image` with the following import:
load("@io_bazel_rules_docker//python:image.bzl", "py_image")
py_image(
name = "py_image",
srcs = ["py_image.py"],
deps = [":py_image_library"],
main = "py_image.py",
)
For Python and Java's `lang_image` rules, you can factor dependencies that don't change into their own layers by overriding the `layers=[]` attribute. Consider this sample from the rules_k8s repository:
py_image(
name = "server",
srcs = ["server.py"],
# "layers" is just like "deps", but it also moves the dependencies each into
# their own layer, which can dramatically improve developer cycle time. For
# example here, the grpcio layer is ~40MB, but the rest of the app is only
# ~400KB. By partitioning things this way, the large grpcio layer remains
# unchanging and we can reduce the amount of image data we repush by ~99%!
layers = [
requirement("grpcio"),
"//examples/hellogrpc/proto:py",
],
main = "server.py",
)
To use a Python 3 runtime instead of the default of Python 2, use `py3_image` instead of `py_image`. The other semantics are identical.
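For instance, a minimal sketch mirroring the `py_image` example above (the source file name is hypothetical, and the load path assumes `py3_image` follows the same `image.bzl` pattern under `python3/`):
load("@io_bazel_rules_docker//python3:image.bzl", "py3_image")
py3_image(
    name = "py3_image",
    srcs = ["py3_image.py"],
    deps = [":py_image_library"],
    main = "py3_image.py",
)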
It is notable that, unlike the other image rules, `nodejs_image` is not currently using the `gcr.io/distroless/nodejs` image, for a handful of reasons. This is a switch we plan to make when we can manage it. We are currently utilizing the `gcr.io/google-appengine/debian9` image as our base.
To use `nodejs_image`, add the following to `WORKSPACE`:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "build_bazel_rules_nodejs",
# Replace with a real SHA256 checksum
sha256 = "{SHA256}",
# Replace with a real commit SHA
strip_prefix = "rules_nodejs-{HEAD}",
urls = ["https://github.com/bazelbuild/rules_nodejs/archive/{HEAD}.tar.gz"],
)
load("@build_bazel_rules_nodejs//:defs.bzl", "node_repositories", "npm_install")
# Download Node toolchain, etc.
node_repositories(package_json = ["//:package.json"])
# Install your declared Node.js dependencies
npm_install(
name = "npm_deps",
package_json = "//:package.json",
)
# Download base images, etc.
load(
"@io_bazel_rules_docker//nodejs:image.bzl",
_nodejs_image_repos = "repositories",
)
_nodejs_image_repos()
Then in your `BUILD` file, simply rewrite `nodejs_binary` to `nodejs_image` with the following import:
load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "nodejs_image",
entry_point = "your_workspace/path/to/file.js",
# This will be put into its own layer.
node_modules = "@npm_deps//:node_modules",
data = [":file.js"],
...
)
To use `go_image`, add the following to `WORKSPACE`:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# You *must* import the Go rules before setting up the go_image rules.
http_archive(
name = "io_bazel_rules_go",
# Replace with a real SHA256 checksum
sha256 = "{SHA256}",
# Replace with a real commit SHA
strip_prefix = "rules_go-{HEAD}",
urls = ["https://github.com/bazelbuild/rules_go/archive/{HEAD}.tar.gz"],
)
load(
"@io_bazel_rules_docker//go:image.bzl",
_go_image_repos = "repositories",
)
_go_image_repos()
Then in your `BUILD` file, simply rewrite `go_binary` to `go_image` with the following import:
load("@io_bazel_rules_docker//go:image.bzl", "go_image")
go_image(
name = "go_image",
srcs = ["main.go"],
importpath = "github.com/your/path/here",
goarch = "amd64",
goos = "linux",
pure = "on",
)
Notice that it is important to explicitly specify `goarch`, `goos`, and `pure`, as the binary should be built for Linux since it will run in a Linux container.
To use a custom base image with any of the `lang_image` rules, you can override the default `base="..."` attribute. Consider this modified sample from the distroless repository:
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")
# Create a passwd file with a root and nonroot user and uid.
passwd_entry(
username = "root",
uid = 0,
gid = 0,
name = "root_user",
)
passwd_entry(
username = "nonroot",
info = "nonroot",
uid = 1002,
name = "nonroot_user",
)
passwd_file(
name = "passwd",
entries = [
":root_user",
":nonroot_user",
],
)
# Create a tar file containing the created passwd file
pkg_tar(
name = "passwd_tar",
srcs = [":passwd"],
mode = "0644",
package_dir = "etc",
)
# Include it in our base image as a tar.
container_image(
name = "passwd_image",
base = "@go_image_base//image",
tars = [":passwd_tar"],
user = "nonroot",
)
# Simple go program to print out the username and uid.
go_image(
name = "user",
srcs = ["user.go"],
# Override the base image.
base = ":passwd_image",
)
To use `java_image`, add the following to `WORKSPACE`:
load(
"@io_bazel_rules_docker//java:image.bzl",
_java_image_repos = "repositories",
)
_java_image_repos()
Then in your `BUILD` file, simply rewrite `java_binary` to `java_image` with the following import:
load("@io_bazel_rules_docker//java:image.bzl", "java_image")
java_image(
name = "java_image",
srcs = ["Binary.java"],
# Put these runfiles into their own layer.
layers = [":java_image_library"],
main_class = "examples.images.Binary",
)
To use `war_image`, add the following to `WORKSPACE`:
load(
"@io_bazel_rules_docker//java:image.bzl",
_java_image_repos = "repositories",
)
_java_image_repos()
Then in your `BUILD` file, simply rewrite `java_war` to `war_image` with the following import:
load("@io_bazel_rules_docker//java:image.bzl", "war_image")
war_image(
name = "war_image",
srcs = ["Servlet.java"],
# Put these JARs into their own layers.
layers = [
":java_image_library",
"@javax_servlet_api//jar:jar",
],
)
To use `scala_image`, add the following to `WORKSPACE`:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# You *must* import the Scala rules before setting up the scala_image rules.
http_archive(
name = "io_bazel_rules_scala",
# Replace with a real SHA256 checksum
sha256 = "{SHA256}",
# Replace with a real commit SHA
strip_prefix = "rules_scala-{HEAD}",
urls = ["https://github.com/bazelbuild/rules_scala/archive/{HEAD}.tar.gz"],
)
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_repositories")
scala_repositories()
load(
"@io_bazel_rules_docker//scala:image.bzl",
_scala_image_repos = "repositories",
)
_scala_image_repos()
Then in your `BUILD` file, simply rewrite `scala_binary` to `scala_image` with the following import:
load("@io_bazel_rules_docker//scala:image.bzl", "scala_image")
scala_image(
name = "scala_image",
srcs = ["Binary.scala"],
main_class = "examples.images.Binary",
)
To use `groovy_image`, add the following to `WORKSPACE`:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# You *must* import the Groovy rules before setting up the groovy_image rules.
http_archive(
name = "io_bazel_rules_groovy",
# Replace with a real SHA256 checksum
sha256 = "{SHA256}",
# Replace with a real commit SHA
strip_prefix = "rules_groovy-{HEAD}",
urls = ["https://github.com/bazelbuild/rules_groovy/archive/{HEAD}.tar.gz"],
)
load("@io_bazel_rules_groovy//groovy:groovy.bzl", "groovy_repositories")
groovy_repositories()
load(
"@io_bazel_rules_docker//groovy:image.bzl",
_groovy_image_repos = "repositories",
)
_groovy_image_repos()
Then in your `BUILD` file, simply rewrite `groovy_binary` to `groovy_image` with the following import:
load("@io_bazel_rules_docker//groovy:image.bzl", "groovy_image")
groovy_image(
name = "groovy_image",
srcs = ["Binary.groovy"],
main_class = "examples.images.Binary",
)
To use `rust_image`, add the following to `WORKSPACE`:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# You *must* import the Rust rules before setting up the rust_image rules.
http_archive(
name = "io_bazel_rules_rust",
# Replace with a real SHA256 checksum
sha256 = "{SHA256}",
# Replace with a real commit SHA
strip_prefix = "rules_rust-{HEAD}",
urls = ["https://github.com/bazelbuild/rules_rust/archive/{HEAD}.tar.gz"],
)
load("@io_bazel_rules_rust//rust:repositories.bzl", "rust_repositories")
rust_repositories()
load(
"@io_bazel_rules_docker//rust:image.bzl",
_rust_image_repos = "repositories",
)
_rust_image_repos()
Then in your `BUILD` file, simply rewrite `rust_binary` to `rust_image` with the following import:
load("@io_bazel_rules_docker//rust:image.bzl", "rust_image")
rust_image(
name = "rust_image",
srcs = ["main.rs"],
)
To use `d_image`, add the following to `WORKSPACE`:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# You *must* import the D rules before setting up the d_image rules.
http_archive(
name = "io_bazel_rules_d",
# Replace with a real SHA256 checksum
sha256 = "{SHA256}",
# Replace with a real commit SHA
strip_prefix = "rules_d-{HEAD}",
urls = ["https://github.com/bazelbuild/rules_d/archive/{HEAD}.tar.gz"],
)
load("@io_bazel_rules_d//d:d.bzl", "d_repositories")
d_repositories()
load(
"@io_bazel_rules_docker//d:image.bzl",
_d_image_repos = "repositories",
)
_d_image_repos()
Then in your `BUILD` file, simply rewrite `d_binary` to `d_image` with the following import:
load("@io_bazel_rules_docker//d:image.bzl", "d_image")
d_image(
name = "d_image",
srcs = ["main.d"],
)
NOTE: all application image rules support the `args` string_list attribute. If specified, they will be appended directly after the container ENTRYPOINT binary name.
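For example, a minimal sketch (the target name and argument values are hypothetical):
go_image(
    name = "go_image_with_args",
    srcs = ["main.go"],
    importpath = "github.com/your/path/here",
    # Appended after the binary in the container ENTRYPOINT.
    args = [
        "--port=8080",
        "serve",
    ],
)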
container_bundle(
name = "bundle",
images = {
# A set of images to bundle up into a single tarball.
"gcr.io/foo/bar:bazz": ":app",
"gcr.io/foo/bar:blah": "//my:sidecar",
"gcr.io/foo/bar:booo": "@your//random:image",
}
)
In `WORKSPACE`:
container_pull(
name = "base",
registry = "gcr.io",
repository = "my-project/my-base",
# 'tag' is also supported, but digest is encouraged for reproducibility.
digest = "sha256:deadbeef",
)
This can then be referenced in `BUILD` files as `@base//image`.
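For instance, a minimal sketch of building on that base (the target name and files are hypothetical):
container_image(
    name = "my_app",
    base = "@base//image",
    # A hypothetical build artifact layered on top of the pulled base.
    files = [":my_app_binary"],
)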
This target pushes on `bazel run :push_foo`:
container_push(
name = "push_foo",
image = ":foo",
format = "Docker",
registry = "gcr.io",
repository = "my-project/my-image",
tag = "dev",
)
We also support the `docker_push` (from `docker/docker.bzl`) and `oci_push` (from `oci/oci.bzl`) aliases, which bake in the `format = "..."` attribute.
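For instance, a minimal sketch of the OCI variant, mirroring the example above (the load statement assumes the alias lives at the `oci/oci.bzl` path noted here):
load("@io_bazel_rules_docker//oci:oci.bzl", "oci_push")
oci_push(
    name = "push_foo_oci",
    image = ":foo",
    # No format attribute: oci_push publishes the image in the OCI format.
    registry = "gcr.io",
    repository = "my-project/my-image",
    tag = "dev",
)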
In `WORKSPACE`:
container_pull(
name = "official_ubuntu",
registry = "index.docker.io",
repository = "library/ubuntu",
tag = "14.04",
)
This can then be referenced in `BUILD` files as `@official_ubuntu//image`.
In `WORKSPACE`:
container_pull(
name = "etcd",
registry = "quay.io",
repository = "coreos/etcd",
tag = "latest",
)
This can then be referenced in `BUILD` files as `@etcd//image`.
In `WORKSPACE`:
container_pull(
name = "artifactory",
registry = "docker.bintray.io",
repository = "jfrog/artifactory-pro",
)
This can then be referenced in `BUILD` files as `@artifactory//image`.
In `WORKSPACE`:
container_pull(
name = "gitlab",
registry = "registry.gitlab.com",
repository = "username/project/image",
tag = "tag",
)
This can then be referenced in `BUILD` files as `@gitlab//image`.
NOTE: This will only work on systems with Python >2.7.6
The digest references to the distroless base images must be updated over time to pick up bug fixes and security patches. To facilitate this, the files containing the digest references are generated by `tools/update_deps.py`. To update all of the dependencies, please run (from the root of the repository):
./update_deps.sh
Image references should not be updated individually, because these images have shared layers and letting them diverge could result in sub-optimal push and pull performance.
container_pull(name, registry, repository, digest, tag)
A repository rule that pulls down a Docker base image in a manner suitable for use with `container_image`'s `base` attribute.
NOTE: Set the `PULLER_TIMEOUT` env variable to change the default 600s timeout.
| Attributes | |
|---|---|
| `name` | Unique name for this repository rule. |
| `registry` | The registry from which to pull the base image. |
| `repository` | The `repository` of images to pull from. |
| `digest` | The `digest` of the Docker image to pull from the specified `repository`. Note: for reproducible builds, use of `digest` is recommended. |
| `tag` | The `tag` of the Docker image to pull from the specified `repository`. If neither this nor `digest` is specified, this attribute defaults to `latest`. If both are specified, then `tag` is ignored. Note: for reproducible builds, use of `digest` is recommended. |
container_push(name, image, registry, repository, tag)
An executable rule that pushes a Docker image to a Docker registry on `bazel run`.
| Attributes | |
|---|---|
| `name` | Unique name for this rule. |
| `format` | The desired format of the published image. Currently, this supports `Docker` and `OCI`. |
| `image` | The label containing a Docker image to publish. |
| `registry` | The registry to which to publish the image. This field supports stamp variables. |
| `repository` | The `repository` of images to which to push. This field supports stamp variables. |
| `tag` | The `tag` of the Docker image to push to the specified `repository`. This attribute defaults to `latest`. This field supports stamp variables. |
| `stamp` | If true, enable use of workspace status variables (e.g. `BUILD_USER`, `BUILD_TIMESTAMP`, and values added via `--workspace_status_command`) in this rule's attributes. These fields are specified in the tag using Python format syntax, e.g. `tag = "{BUILD_USER}"`. |
container_layer(data_path, directory, files, mode, tars, debs, symlinks, env)
A rule that assembles data into a tarball which can be used in the `layers` attribute of `container_image` rules.
| Implicit output targets | |
|---|---|
| `name-layer.tar` | A tarball of the current layer. A data tarball corresponding to the layer. |
| Attributes | |
|---|---|
| `name` | Name, required. A unique name for this rule. |
| `data_path` | String, optional. Root path of the files. The directory structure from the files is preserved inside the Docker image, but a prefix path determined by `data_path` is removed from the directory structure. This path can be absolute from the workspace root if starting with a `/`, or relative to the rule's directory. A relative path may start with "./" (or be ".") but cannot go up with "..". By default, the `data_path` attribute is unused, and all files should have no prefix. |
| `directory` | String, optional. Target directory. The directory in which to expand the specified files, defaulting to '/'. Only makes sense accompanying one of files/tars/debs. |
| `files` | List of files, optional. File to add to the layer. A list of files that should be included in the Docker image. |
| `mode` | String, default to 0555. Set the mode of files added by the `files` attribute. |
| `tars` | List of files, optional. Tar file to extract in the layer. A list of tar files whose content should be in the Docker image. |
| `debs` | List of files, optional. Debian package to install. A list of debian packages that will be installed in the Docker image. |
| `symlinks` | Dictionary, optional. Symlinks to create in the Docker image. |
| `env` | Dictionary from strings to strings, optional. Dictionary from environment variable names to their values when running the Docker image. The values of this field support stamp variables. |
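As an illustration, a minimal sketch composing `container_layer` with `container_image` (the file names and values are hypothetical, and it assumes `container_layer` is loaded from the same `container.bzl` used earlier):
load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_image",
    "container_layer",
)
container_layer(
    name = "config_layer",
    directory = "/etc/myapp",   # hypothetical target directory
    files = ["myapp.conf"],     # hypothetical config file
    env = {"MYAPP_CONFIG": "/etc/myapp/myapp.conf"},
)
container_image(
    name = "app_with_config",
    base = "@java_base//image",
    layers = [":config_layer"],
)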
container_image(name, base, data_path, directory, files, legacy_repository_naming, mode, tars, debs, symlinks, entrypoint, cmd, creation_time, env, labels, ports, volumes, workdir, layers, repository)
| Implicit output targets | |
|---|---|
| `name.tar` | The full Docker image. A full Docker image containing all the layers, identical to what `docker save` would return. |
| `name-layer.tar` | An image of the current layer. A Docker image containing only the layer corresponding to that target. It is used for incremental loading of the layer. Note: this target is not suitable for direct consumption. It is used for incremental loading, and non-docker rules should depend on the Docker image (name.tar) instead. |
| `name` | Incremental image loader. The incremental image loader. It will load only changed layers inside the Docker registry. |
| Attributes | |
|---|---|
| `name` | Name, required. A unique name for this rule. |
| `base` | File, optional. The base layers on top of which to overlay this layer, equivalent to FROM. |
| `data_path` | String, optional. Root path of the files. The directory structure from the files is preserved inside the Docker image, but a prefix path determined by `data_path` is removed from the directory structure. This path can be absolute from the workspace root if starting with a `/`, or relative to the rule's directory. A relative path may start with "./" (or be ".") but cannot go up with "..". By default, the `data_path` attribute is unused, and all files should have no prefix. |
| `directory` | String, optional. Target directory. The directory in which to expand the specified files, defaulting to '/'. Only makes sense accompanying one of files/tars/debs. |
| `files` | List of files, optional. File to add to the layer. A list of files that should be included in the Docker image. |
| `legacy_repository_naming` | Bool, default to False. Whether to use the legacy strategy for setting the repository name embedded in the resulting tarball. |
| `mode` | String, default to 0555. Set the mode of files added by the `files` attribute. |
| `tars` | List of files, optional. Tar file to extract in the layer. A list of tar files whose content should be in the Docker image. |
| `debs` | List of files, optional. Debian package to install. A list of debian packages that will be installed in the Docker image. |
| `symlinks` | Dictionary, optional. Symlinks to create in the Docker image. |
| `user` | String, optional. The user that the image should run as. Because building the image never happens inside a Docker container, this user does not affect the other actions (e.g., adding files). This field supports stamp variables. |
| `entrypoint` | String or string list, optional. List of entrypoints to add in the image. This field supports stamp variables. |
| `cmd` | String or string list, optional. List of commands to execute in the image. This field supports stamp variables. |
| `creation_time` | String, optional, default to {BUILD_TIMESTAMP} when stamp = True, otherwise 0. The image's creation timestamp. Acceptable formats: integer or floating point seconds since Unix Epoch, RFC 3339 date/time. This field supports stamp variables. |
| `env` | Dictionary from strings to strings, optional. Dictionary from environment variable names to their values when running the Docker image. The values of this field support stamp variables. |
| `labels` | Dictionary from strings to strings, optional. The values of this field support stamp variables. |
| `ports` | String list, optional. Ports to expose. |
| `volumes` | String list, optional. Volumes to mount. |
| `workdir` | String, optional. Initial working directory when running the Docker image. Because building the image never happens inside a Docker container, this working directory does not affect the other actions (e.g., adding files). This field supports stamp variables. |
| `layers` | Label list, optional. List of `container_layer` targets. The data from each `container_layer` will be part of the container image, and its environment variables will be available in the image as well. |
| `repository` | String, default to `bazel`. The repository for the default tag for the image. Images generated by `container_image` are tagged by default to `bazel/package_name:target` for a `container_image` target at `//package/name:target`. Setting this attribute to `gcr.io/dummy` would set the default tag to `gcr.io/dummy/package_name:target`. |
| `stamp` | If true, enable use of workspace status variables (e.g. `BUILD_USER`, `BUILD_TIMESTAMP`, and values added via `--workspace_status_command`) in this rule's attributes. These fields are specified in attributes using Python format syntax, e.g. `{BUILD_USER}`. |
container_bundle(name, images)
A rule that aliases and saves N images into a single `docker save` tarball.
| Attributes | |
|---|---|
| `name` | Unique name for this rule. |
| `images` | A collection of the images to save into the tarball. The keys are the tags with which to alias the image specified by the value. These tags may contain make variables (e.g. `$(project)`), which can be set on the CLI via `--define project=blah`. The values may be the output of `container_pull` or `container_image` rules. |
| `stamp` | If true, enable use of workspace status variables (e.g. `BUILD_USER`, `BUILD_TIMESTAMP`, and values added via `--workspace_status_command`) in the tags. These fields are specified in the tag using Python format syntax, e.g. `{BUILD_USER}`. |
container_import(name, config, layers)
A rule that imports a docker image into our intermediate form.
| Attributes | |
|---|---|
| `name` | Unique name for this rule. |
| `config` | A json configuration file containing the image's metadata. This appears in `docker save` tarballs as `.json` and is referenced by `manifest.json` in the config field. |
| `layers` | The list of layer tarballs that make up the image (e.g. the layer files referenced by a `docker save` tarball's `manifest.json`). |
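For instance, a minimal sketch (the file names are hypothetical, it assumes `container_import` is loaded from the same `container.bzl` used earlier, and whether the layer files must be compressed may depend on the rule version):
load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_import",
)
container_import(
    name = "imported_base",
    # Files as laid out by `docker save`: the image config plus its layers.
    config = "image_config.json",  # hypothetical
    layers = [
        "layer_0.tar",  # hypothetical
        "layer_1.tar",  # hypothetical
    ],
)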
container_load(name, file)
A repository rule that examines the contents of a `docker save` tarball and creates a `container_import` target. The created target can be referenced as `@label_name//image`.
| Attributes | |
|---|---|
| `name` | Unique name for this rule. |
| `file` | A label targeting a single file which is a compressed or uncompressed tar, as obtained through `docker save IMAGE`. |
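For instance, a minimal sketch of a `WORKSPACE` entry (the label is hypothetical, and it assumes `container_load` is loaded from the same `container.bzl` used earlier):
load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_load",
)
container_load(
    name = "prebuilt_image",
    # A tarball produced by `docker save`, checked in or built elsewhere.
    file = "//third_party/images:prebuilt.tar",  # hypothetical label
)
The loaded image can then be referenced in `BUILD` files as `@prebuilt_image//image`.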