A collection of Frequently Asked Questions about Ignite:
No, you can't. Ignite isn't designed for running containers, hence it cannot work as a CRI runtime.
Ignite runs VMs instead. In the future, we envision Ignite (maybe) being able to run VMs (not containers) based on Kubernetes Pods using some special annotations. This would however (most likely) be done as a containerd plugin, lower in the stack than the CRI.
Kata Containers, gVisor, and firecracker-containerd run containers, and Ignite runs VMs.
Kata can integrate with Firecracker, but the value-add there is more isolation, as the container is spawned inside of a minimal Firecracker VM.
firecracker-containerd enables you to do the same as Kata (add isolation for a container), but in a more lightweight manner, as a containerd plugin.
gVisor acts as a gatekeeper between your application in a container and the kernel. gVisor emulates the kernel syscalls and, based on whether they are "safe" or not, either passes them through to the underlying kernel or performs an equivalent operation itself. gVisor's value-add is the same as the above: added isolation for containers.
Ignite however, uses the rootfs from an OCI image, and runs that content as a real VM. Inside of the Firecracker VM spawned, there are no extra containers running (unless the user installs a container runtime).
Firecracker is a Virtual Machine Monitor (VMM) built on KVM; it uses KVM to manage and virtualize the VM.
In order to prepare the filesystem, Ignite needs to create a file containing an ext4 filesystem for Firecracker to boot later. In order to populate this filesystem in the file-based block device, Ignite needs to temporarily `mount` the filesystem and copy the desired root filesystem in. `mount` requires the UID to be 0 (root). We hope to remove this requirement from the Ignite CLI in the future (#24, #33). However, some part of Ignite (although hidden) will always need to execute as root due to the need to `mount`.
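For illustration, here is a minimal sketch of that preparation step in Go, shelling out to `mkfs.ext4` and `mount`. The file name, size, and mount point are made up for the example, and Ignite's actual implementation differs; the point is just to show where root is needed:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and fails loudly for brevity.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(fmt.Sprintf("%s %v: %v", name, args, err))
	}
}

func main() {
	const img, mnt = "rootfs.img", "/mnt/ignite-rootfs" // hypothetical paths

	// 1. Create a sparse file to act as the block device.
	f, err := os.Create(img)
	if err != nil {
		panic(err)
	}
	if err := f.Truncate(4 << 30); err != nil { // 4 GiB, example size
		panic(err)
	}
	f.Close()

	// 2. Format it with an ext4 filesystem.
	run("mkfs.ext4", img)

	// 3. Mount it via a loop device -- this is the step that requires root (UID 0).
	if err := os.MkdirAll(mnt, 0755); err != nil {
		panic(err)
	}
	run("mount", "-o", "loop", img, mnt)

	// 4. Copy the desired root filesystem in (elided here), then unmount.
	run("umount", mnt)
}
```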
No. Firecracker requires KVM (as per the above), a feature that is not available on macOS. Technically, you could spin up a Linux VM on a Mac with nested virtualization enabled, and run Ignite inside of that Linux VM. However, that might defeat the purpose of running Ignite on a Mac in the first place.
Docker, currently the only available container runtime usable by Ignite, is used for a couple of reasons:
- Running long-lived processes: At the very early Ignite PoC stage, we tried to run the Firecracker process under `systemd`, but this was in many ways suboptimal. Attaching to the serial console and fetching logs, among other things, were very hard to achieve. Also, we'd need to somehow install the Firecracker binary on the host. Packaging everything in a container, and running the Firecracker process in that container, was a natural fit.
- Sandboxing the Firecracker process: Firecracker should not be run on the host without sandboxing, as per its security model. Firecracker provides the `jailer` binary to do sandboxing/isolation of the Firecracker process from the host, but a container does this equally well, if not better.
- Container Networking: Using containers, we already know what IP to give the VM. We can integrate with e.g. the default docker bridge, docker's `libnetwork` in general, or CNI. This reduces the scope of and work needed by Ignite, and keeps our implementation lean. It also directly makes Ignite usable alongside normal containers, e.g. on a host running Kubernetes Pods.
- OCI compliant operations: Using an existing container runtime, we do not need to implement everything from the OCI spec ourselves. Instead, we can re-use functionality from the runtime, e.g. `pull`, `create`, and `export`.
All in all, we do not want to reinvent the wheel. We reuse what we can from existing proven container tools.
In short, we `pull` an OCI image using the container runtime (docker for now), `create` a new container using this image, and finally `export` the rootfs of that created container to a tar file. This tar file is then extracted into the mount point of an ext4-formatted block device file of the OCI image's size. The kernel OCI image is similarly copied into the VM's rootfs. Lastly, Ignite modifies some well-known files like `/etc/hosts` and `/etc/resolv.conf` so that the VM works as you would expect it to.
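Ignite talks to the runtime through its API, but the flow is equivalent to driving the docker CLI, as this rough Go sketch shows (the image name and paths are illustrative):

```go
package main

import (
	"os"
	"os/exec"
	"strings"
)

// docker runs a docker subcommand and returns its trimmed stdout.
func docker(args ...string) string {
	out, err := exec.Command("docker", args...).Output()
	if err != nil {
		panic(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	image := "weaveworks/ignite-ubuntu" // example image

	// 1. Pull the OCI image.
	docker("pull", image)

	// 2. Create (but do not start) a container from it; this prints the container ID.
	id := docker("create", image)

	// 3. Export the created container's root filesystem to a tar archive.
	docker("export", "-o", "rootfs.tar", id)

	// 4. Extract the tar into the mount point of the ext4-formatted block
	//    device file prepared earlier (path is hypothetical).
	if err := exec.Command("tar", "-xf", "rootfs.tar", "-C", "/mnt/ignite-rootfs").Run(); err != nil {
		panic(err)
	}
	_ = os.Remove("rootfs.tar")
}
```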
First, Ignite spawns a container using the runtime. In this container, one Ignite component, `ignite-spawn`, is running. `ignite-spawn` iterates over the network interfaces inside of the container, looking for a valid one to use for the VM. It removes the IP address from the container's interface, and remembers it for later.
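A simplified version of that discovery step, using only Go's standard library (the filtering criteria here are illustrative, not Ignite's exact rules):

```go
package main

import (
	"fmt"
	"net"
)

// findVMInterface returns the first up, non-loopback interface that has
// an IPv4 address, together with that address.
func findVMInterface() (string, net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return "", nil, err
	}
	for _, iface := range ifaces {
		if iface.Flags&net.FlagLoopback != 0 || iface.Flags&net.FlagUp == 0 {
			continue // skip loopback and down interfaces
		}
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}
		for _, addr := range addrs {
			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return iface.Name, ipnet.IP, nil
			}
		}
	}
	return "", nil, fmt.Errorf("no usable interface found")
}

func main() {
	name, ip, err := findVMInterface()
	if err != nil {
		panic(err)
	}
	fmt.Printf("using %s with IP %s for the VM\n", name, ip)
}
```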
Next, `ignite-spawn` creates a `tap` device which Firecracker will use, and bridges the `tap` device with the existing `veth` interface created by the container runtime. With these two interfaces bridged, all traffic routed to the container ends up at the VM's `tap` interface.
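A minimal sketch of that bridging step using the `github.com/vishvananda/netlink` package. The interface names are examples, and the setup Ignite actually performs is more involved; this only shows the core tap/bridge/veth wiring:

```go
package main

import (
	"github.com/vishvananda/netlink"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// 1. Create a tap device for Firecracker to attach the VM to.
	tap := &netlink.Tuntap{
		LinkAttrs: netlink.LinkAttrs{Name: "vm0"}, // example name
		Mode:      netlink.TUNTAP_MODE_TAP,
	}
	must(netlink.LinkAdd(tap))

	// 2. Create a bridge to join the tap device and the container's veth.
	br := &netlink.Bridge{LinkAttrs: netlink.LinkAttrs{Name: "br0"}}
	must(netlink.LinkAdd(br))

	// 3. Enslave both the tap device and the existing veth interface
	//    (here assumed to be eth0 inside the container) to the bridge.
	veth, err := netlink.LinkByName("eth0")
	must(err)
	must(netlink.LinkSetMaster(tap, br))
	must(netlink.LinkSetMaster(veth, br))

	// 4. Bring everything up; traffic to the container now reaches the tap.
	must(netlink.LinkSetUp(tap))
	must(netlink.LinkSetUp(br))
}
```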
Lastly, `ignite-spawn` spawns the Firecracker process, which starts the VM. The VM is started with the `ip=dhcp` kernel argument, which makes the kernel automatically send a DHCP request for an IP. The kernel asks for an IP to use, and `ignite-spawn` responds with the IP the container initially had.
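To illustrate how such a boot argument is passed along, here is a sketch that sets the kernel boot source through Firecracker's HTTP API over its unix socket. The socket and kernel paths are examples, and Ignite itself drives Firecracker through its Go SDK rather than raw HTTP:

```go
package main

import (
	"bytes"
	"context"
	"net"
	"net/http"
)

func main() {
	sock := "/tmp/firecracker.sock" // example socket path

	// Firecracker listens on a unix socket, so dial that instead of TCP.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", sock)
			},
		},
	}

	// PUT /boot-source sets the kernel and its command line; "ip=dhcp"
	// makes the guest kernel request an IP at boot, which ignite-spawn answers.
	body := []byte(`{
		"kernel_image_path": "/path/to/vmlinux",
		"boot_args": "console=ttyS0 reboot=k panic=1 ip=dhcp"
	}`)
	req, err := http.NewRequest(http.MethodPut, "http://localhost/boot-source", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```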
As per the announcement blog post: https://www.weave.works/blog/fire-up-your-vms-with-weave-ignite
Ignite is a clean-room implementation of a project Lucas prototyped while on army service.
Lucas Käldström (@luxas) is a Kubernetes SIG Lead and Top CNCF Ambassador 2017, and has been a longstanding member of the Weaveworks family since graduating from high school (story here). As a young Finnish citizen, Lucas had to do his mandatory military service for around a year.
Naturally, Lucas started evangelising Kubernetes within the military, and was assigned programming tasks. Security and resource consumption are critical army concerns, so Lucas and a colleague, Dennis Marttinen, decided to experiment with Firecracker, creating an elementary version of Ignite. On leaving the army, they were granted permission to work on an open-source rewrite together with Weaveworks.