This is a Vagrant Environment for playing with LinuxKit.
Build and install the Ubuntu Base Box.
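For example, once you have built the base box file with its own build procedure, you can register it with vagrant along these lines (a minimal sketch; the box name and file name below are hypothetical and must match what the `Vagrantfile` expects):

```bash
# hypothetical example: the box name and file name depend on the base box you built.
vagrant box add --name ubuntu-amd64 ./ubuntu-amd64.box
# confirm the box is available.
vagrant box list
```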
Run `vagrant up builder --no-destroy-on-error --no-tty` to launch the environment that builds the `shared/linuxkit-example.iso` and `shared/linuxkit-example-uefi.iso` files.
Run `vagrant up bios-iso --no-destroy-on-error --no-tty` to launch `shared/linuxkit-example.iso`.
Run `vagrant up uefi-iso --no-destroy-on-error --no-tty` to launch `shared/linuxkit-example-uefi.iso`.
Then access a linuxkit instance with, e.g.:

```bash
vagrant ssh builder
sudo ssh bios-iso # or uefi-iso
exit
```
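Once inside an instance, you can see which LinuxKit services are running by asking containerd (a minimal sketch; `services.linuxkit` is the default LinuxKit containerd namespace for services, but verify it on your image with `ctr namespaces list`):

```bash
# run inside the bios-iso/uefi-iso instance: list namespaces and running services.
ctr namespaces list
ctr --namespace services.linuxkit tasks list
```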
You can also launch the iso with one of:

```bash
(cd shared && ./linuxkit run qemu -gui -iso linuxkit-example.iso)
(cd shared && ./linuxkit run vbox -gui -iso linuxkit-example.iso)
```
You can also directly launch the kernel and initrd in qemu:

```bash
(cd shared && ./linuxkit run qemu -gui -kernel linuxkit-example)
(cd shared && cp /usr/share/ovmf/OVMF.fd . && ./linuxkit run qemu -gui -uefi -fw ./OVMF.fd -kernel linuxkit-example)
```
You can list the contents of the initramfs with:

```bash
zcat shared/linuxkit-example-initrd.img | cpio --list --numeric-uid-gid --verbose | less
```
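If you need to look at individual files, you can also extract the initramfs into a scratch directory (a sketch using standard GNU cpio flags):

```bash
# extract the initramfs into a scratch directory for closer inspection.
mkdir -p /tmp/linuxkit-example-initrd
zcat shared/linuxkit-example-initrd.img \
  | (cd /tmp/linuxkit-example-initrd && cpio --extract --make-directories --preserve-modification-time)
ls -laF /tmp/linuxkit-example-initrd
```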
You can execute docker containers with:

```bash
# enter the builder machine.
vagrant ssh builder
# switch to root.
sudo -i
# enter the bios-iso machine.
ssh bios-iso
# verify the dockerd configuration.
cat /hostroot/etc/docker/daemon.json
# open a shell in the docker service.
ctr tasks exec --exec-id shell -t docker ash
# execute a docker container.
docker run \
  -d \
  --restart unless-stopped \
  --name hello-docker \
  --label worker_id=123 \
  alpine:3.14 \
  /bin/sh \
  -c \
  'while true; do echo hello docker $(date); sleep 1; done'
# tail the logs.
# NB at the builder machine, you can tail them with:
#    logcli query --tail '{source="hello-docker"}'
docker logs -f hello-docker
# interact with containerd.
export CONTAINERD_ADDRESS=/var/run/docker/containerd/containerd.sock
export CONTAINERD_NAMESPACE=moby
ctr namespaces list
ctr containers list
ctr --namespace plugins.moby containers list
```
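When you are done with the example container, you can clean it up from the same docker shell (standard docker commands):

```bash
# stop and remove the example container.
docker stop hello-docker
docker rm hello-docker
```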
You can read individual logs at `/var/log` as separate files. These log files are managed by the `logwrite` service.
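For example, to follow those files directly (a sketch; the exact file names depend on the services included in your image):

```bash
# run inside a linuxkit instance: list the log files and follow them.
ls -la /var/log
tail -f /var/log/*.log
```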
You can dump (and then follow) all the logs with `logread -F`. This reads the logs from the `memlogd` managed named socket at `/var/run/memlogdq.sock`.
The logs are also sent to the `builder` machine.

You can explore them with Grafana at:

You can also explore them with `logcli`:
```bash
vagrant ssh builder
# list all series/streams.
logcli series '{}' | sort
# list all labels.
logcli labels -q | sort
# list all sources.
logcli labels -q source | sort
# get all the containerd logs.
# NB you might want to add --forward --limit 1000 to see the logs from
#    oldest to newest.
logcli query '{source="containerd"}'
# tail all the containerd logs.
logcli query --tail '{source="containerd"}'
# raw tail all the containerd logs.
logcli query --tail --output raw '{source="containerd"}'
# tail all sources.
logcli query --tail --limit 1000 '{source=~".+"}'
# tail all sources looking for errors.
logcli query --tail --limit 1000 '{source=~".+"} |~ "error"'
```
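logcli can also query a bounded time range instead of tailing, which is handy for post-mortem digging (a sketch; check `logcli query --help` for the flags supported by your logcli version):

```bash
# get the last hour of docker service logs, oldest first.
logcli query --since 1h --forward --limit 1000 '{source="docker"}'
```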
Available log labels:

| Label | Description |
|---|---|
| `host` | hostname of the host that captured the log |
| `job` | name of the collector that captured the log |
| `source` | name of the source that produced the log |
Available `job` label instances:

| Job | Description |
|---|---|
| `containerd` | logs read from containerd log files |
| `container` | logs read from each docker container |
| `logwrite` | logs read from logwrite generated log files |
Available `source` label instances:

| Source | Description |
|---|---|
| `containerd` | containerd service |
| `dhcpcd` | dhcpcd service |
| `docker` | docker service |
| `hello` | hello service |
| `hello-docker` | hello-docker docker container |
| `kmsg` | linux kernel |
| `memlogd` | memlogd service |
| `promtail` | promtail service |
| `rngd` | rngd service |
| `rngd1` | rngd onboot service |
| `sshd` | sshd service |
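Since every stream carries all of these labels, you can combine them in a single LogQL selector, e.g. to see only what the `container` job captured from the `hello-docker` container:

```bash
# tail a single stream selected by both the job and source labels.
logcli query --tail '{job="container",source="hello-docker"}'
```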
You can easily capture and see traffic from the host with the `wireshark.sh` script, e.g., to capture the traffic from the `eth1` interface:

```bash
./wireshark.sh builder eth1
```
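If you prefer not to use the script, the same idea can be done by hand: run tcpdump inside the machine over vagrant ssh and pipe the capture into a local wireshark (a sketch assuming tcpdump is available in the guest and wireshark on your host):

```bash
# capture eth1 traffic on the builder machine and display it in a local wireshark.
vagrant ssh builder -- sudo tcpdump -i eth1 -s 0 -U -w - not port 22 | wireshark -k -i -
```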
This environment can also PXE/iPXE/UEFI-HTTP boot LinuxKit.
To PXE boot a BIOS Virtual Machine with PXE/TFTP/iPXE/HTTP run:

```bash
vagrant up bios-pxe --no-destroy-on-error --no-tty
```
To PXE boot a UEFI Virtual Machine with PXE/TFTP/iPXE/HTTP run:

```bash
vagrant up uefi-pxe --no-destroy-on-error --no-tty
```
To boot Physical Machines you have to:

- Create a Linux Bridge that can reach a Physical Switch that connects to your Physical Machines.
  - This environment assumes you have a setup like rgl/ansible-collection-tp-link-easy-smart-switch.
  - To configure it otherwise you must modify the `Vagrantfile`.
- Add your machines to `machines.json`.
- Configure your machines to PXE boot.
This was tested on the following physical machines and boot modes:
- Seeed Studio Odyssey X86J4105
  - It boots using UEFI/HTTP/PXE.
- HP EliteDesk 800 35W G2 Desktop Mini
  - It boots using UEFI/TFTP/PXE.
  - This machine can be remotely managed with MeshCommander.
  - It was configured as described at rgl/intel-amt-notes.
- LinuxKit
- Linux
- Promtail
- Loki Docker Driver
- Loki
- iPXE
- Matchbox
- Dynamic Host Configuration Protocol (DHCP)
- Building the Simplest Possible Linux System by Rob Landley