Building VM images using esdc-factory
Let's start with some theory:
- An image is a copy of a virtual machine's disk. It is used to quickly spawn virtual machines without the need to install an operating system or configure services.
- In the case of Danube Cloud (and other SmartOS-related software), an image is a ZFS data stream (usually compressed) + an image manifest (metadata).
- Images are served by an image server (usually via HTTP(S)) and imported directly to a compute node's zpool. The image server should be compatible with Image API (IMGAPI).
- Each Danube Cloud installation by default uses its own local (intermediate) image server (esdc-shipment).
- Images used in Danube Cloud are compatible with images from https://images.joyent.com or http://datasets.at.
At Erigones, we build images daily, mainly because some images are part of our first compute node USB image. And of course, we do this automatically, using our Ansible-based project - esdc-factory. The hardest thing about creating images is that you have to maintain them and release new versions periodically, which is why automation is so important here. The best thing about esdc-factory is that, thanks to Ansible roles and tasks, image build scripts can share a lot of code, and that code can be re-used for building other images as well.
The prerequisites are listed in the README.rst in the esdc-factory repo, but let's quickly go through them. You will need one or two network-connected Linux/Unix machines:
- buildnode (remote host) - this must be a SmartOS or Danube Cloud compute node capable of running KVM machines and/or zones (depending on what kind of images you are going to build). If you are going to use SmartOS, please make sure that Python is installed and available on the node.
- builder (local host) - this is a Linux/Unix machine that has the esdc-factory repo checked out. It can be the same machine as the buildnode. The following software must be installed on the system:
- git
- Ansible >= 2.0
- GNU make
- sshpass
- OpenSSH client
- a working ssh-agent with build_ssh_key loaded (for running git clone on the remote host, see below)
- a running web server serving build_base_url (this can be a simple web server, see below)
You don't have to configure anything on the buildnode; all configuration steps are performed on the builder host:
[user@builder ~]$ mkdir data; cd data
[user@builder ~/data]$ python -m SimpleHTTPServer 8000
[user@builder ~]$ ssh-keygen -t rsa; eval "$(ssh-agent)"; ssh-add
[user@builder ~]$ git clone https://github.com/erigones/esdc-factory; cd esdc-factory/etc
[user@builder ~/esdc-factory/etc]$ cp hosts.sample.cfg hosts.cfg
[user@builder ~/esdc-factory/etc]$ cp config.sample.yml config.yml
Let's edit both configuration files:
etc/hosts.cfg
This file has only three lines (including the [build] configuration group). You have to set the IP addresses of builder and buildnode and optionally set ansible_python_interpreter if Python is in a non-standard PATH.
[build]
builder ansible_ssh_host=127.0.0.1 ansible_connection=local
buildnode ansible_ssh_host=127.0.0.1 ansible_python_interpreter=/opt/local/bin/python
etc/config.yml
You have to adjust these configuration variables to reflect your environment:
- build_base_url: 'http://192.168.23.100:8000' - URL pointing to the web server you have previously configured
- build_base_dir: '/home/user/data' - Full path to the document root directory served by the web server
- build_ssh_key: 'ssh-rsa blabla user@host' - The SSH key loaded into the ssh-agent on the builder
- build_image_password: 'passw0rd' - Password set for the root user in base images
- build_disk_compression: lz4
- build_nic_tag: admin - NIC tag of the interface on the buildnode which will be used by VMs to access the network
The network configuration below depends on your network setup:
- build_gateway: 192.168.23.1 - Network gateway of VMs for the time of image building
- build_netmask: 255.255.255.0 - Network mask of VMs for the time of image building
- build_resolvers: [ '8.8.8.8', '8.8.4.4' ]
- build_ips: - This is a dictionary, which can be used to configure a custom IP address for every built VM image. It has to be here even if it is empty
- build_ip: 192.168.23.42 - Default IP address of every VM
- build_vnc_ports: - This is a dictionary, which can be used to configure a custom VNC port for every built VM image. It has to be here even if it is empty
- build_vnc_port: 60000 - Default VNC port of every virtual machine
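To see how these fit together, here is a minimal etc/config.yml sketch assembled from the sample values above; treat it as an illustration and adjust every value to your own environment (the {} entries are the empty dictionaries mentioned above):

# etc/config.yml - assembled from the sample values above; adjust to your setup
build_base_url: 'http://192.168.23.100:8000'
build_base_dir: '/home/user/data'
build_ssh_key: 'ssh-rsa blabla user@host'
build_image_password: 'passw0rd'
build_disk_compression: lz4
build_nic_tag: admin
build_gateway: 192.168.23.1
build_netmask: 255.255.255.0
build_resolvers: [ '8.8.8.8', '8.8.4.4' ]
build_ips: {}          # empty dictionary - no per-image IP overrides
build_ip: 192.168.23.42
build_vnc_ports: {}    # empty dictionary - no per-image VNC port overrides
build_vnc_port: 60000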
The main part of the esdc-factory repo is the ansible directory, which contains all the build playbooks, roles, tasks, and variables. For now, let's just test whether everything is working fine. There is a convenient wrapper script and Makefile for running the build playbooks. Just type make help to see all options:
[user@builder ~/esdc-factory]$ make help
First, you have to initialize the build_base_dir
directory structure:
[user@builder ~/esdc-factory]$ make init
Now you can build an image. Let us test it by building a KVM CentOS 7 base image:
[user@builder ~/esdc-factory]$ make base-centos-7
The build files will be prefixed with the word "contrib" to indicate that this is an image contributed by a user and not connected to other build logic (e.g. not related to building the Danube Cloud USB image).
These are the newly created or affected files, which we will now go through one by one:
Let's start by writing some documentation for our new image. The documentation should include a description of the image and a list of supported metadata along with their description. Something like this: docs/contrib/gitlab-ce.rst.
The main playbook is located in the ansible directory, and the file should be prefixed with the word "build". Our playbook will be called build-contrib-gitlab-ce.yml. Every playbook for creating images is divided into four parts, also called plays:
Although this play is not required, it is recommended to keep it here. Its tasks perform some basic checks, e.g. whether the build web server is reachable and whether build_base_dir is configured correctly.
- name: Check builder host
  hosts: builder
  tasks:
    - include: tasks/build/check.yml
      when: skip_check is not defined or not skip_check
This play will create a VM on the buildnode and register it in the running Ansible playbook under a specific name (the hostname parameter). The pre_tasks section includes tasks that make sure a base image is installed on the buildnode and remove an old VM, which may still exist on the buildnode from a previously failed build.
- name: Create virtual machine
  hosts: buildnode
  vars_files:
    - vars/build/vm/contrib-gitlab-ce.yml
  pre_tasks:
    - include: tasks/build/cleanup.yml
    - include: tasks/build/prepare-base-image.yml
  roles:
    - smartos-vm
  tasks:
    - include: tasks/build/centos/register-host.yml hostname=contrib-gitlab-ce
The VM parameters should be configured in the ansible/vars/build/vm/contrib-gitlab-ce.yml file. These parameters start with the zone_ prefix and are used by the smartos-vm role. You will get the idea by looking at other VM vars files in the ansible/vars/build/vm folder.
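For illustration only, such a vars file could look roughly like the sketch below. The zone_-prefixed parameter names are hypothetical placeholders (check the existing files in ansible/vars/build/vm for the names the smartos-vm role actually understands); the image_* and builder_dir variables are the ones used by the final play described further below, with placeholder values.

# ansible/vars/build/vm/contrib-gitlab-ce.yml - illustrative sketch only;
# the zone_* parameter names below are placeholders, see the existing
# vars files in ansible/vars/build/vm for the real ones
zone_alias: contrib-gitlab-ce   # hypothetical VM alias
zone_ram: 4096                  # hypothetical RAM size (MB)
zone_vcpus: 2                   # hypothetical number of vCPUs

# image metadata consumed by the final "Create and save image" play
image_name: gitlab-ce                       # placeholder value
image_desc: 'GitLab CE appliance'           # placeholder value
image_homepage: 'https://about.gitlab.com'
builder_dir: contrib                        # placeholder value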
This is the main play that runs tasks in the VM created in the 2nd play. You should include all modifications and configuration steps in this play. The last role here should be vm-image. This role runs a script inside the VM, which will clean up the VM and prepare it for a snapshot that will be used for creating the final image.
- name: Install and configure appliance
  hosts: contrib-gitlab-ce
  gather_facts: true
  vars_files:
    - vars/build/os/contrib-gitlab-ce.yml
  roles:
    - esdc-common
    - selinux
    - zabbix-agent
    - cloud-init
    - rc-scripts
    - iptables
    - mdata-client
    - qemu-guest-agent
    - contrib-gitlab-ce
    - passwords
    - vm-image
If a role requires some variables to be set, then these should go into the ansible/vars/build/os/contrib-gitlab-ce.yml file. For example, we will add the gitlab_ce_version and gitlab_ce_checksum variables here; these variables will be used by our new role - contrib-gitlab-ce.
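A small sketch of that vars file; the variable names come from the paragraph above, while the values are placeholders that must match the GitLab CE package you actually download:

# ansible/vars/build/os/contrib-gitlab-ce.yml - placeholder values
gitlab_ce_version: '8.16.5-ce.0'    # example version string, not authoritative
gitlab_ce_checksum: 'sha256:<checksum of the downloaded package>'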
The role contrib-gitlab-ce is based on the official GitLab CE installation instructions and does the following (a rough sketch of the task file follows the list):
- installs all required packages;
- downloads and installs GitLab CE;
- installs an es-post-deploy.sh script.
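This is only a hedged sketch of what ansible/roles/contrib-gitlab-ce/tasks/main.yml might contain, assuming the omnibus RPM is fetched with get_url; the package list, download URL and module arguments are illustrative, and the real task file in the repo is authoritative:

# ansible/roles/contrib-gitlab-ce/tasks/main.yml - illustrative sketch only
- name: Install packages required by GitLab CE
  yum: name={{ item }} state=present
  with_items:
    - curl
    - policycoreutils
    - openssh-server
    - postfix

- name: Download the GitLab CE omnibus package
  get_url:
    url: "https://example.com/gitlab-ce-{{ gitlab_ce_version }}.el7.x86_64.rpm"   # placeholder URL
    dest: "/tmp/gitlab-ce-{{ gitlab_ce_version }}.rpm"
    checksum: "{{ gitlab_ce_checksum }}"

- name: Install GitLab CE
  yum: name=/tmp/gitlab-ce-{{ gitlab_ce_version }}.rpm state=present

- name: Install the es-post-deploy.sh script
  copy: src=es-post-deploy.sh dest=/var/lib/rc-scripts/es-post-deploy.sh mode=0750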
You can see the details in the ansible/roles/contrib-gitlab-ce/tasks/main.yml task file, but let's examine the es-post-deploy.sh script installed by the last task. The script will be installed into /var/lib/rc-scripts and run by the systemd rc-scripts.service during every VM boot. The script reads the VM metadata and uses them to configure the VM and services accordingly. During the initial VM boot, the script will perform the following operations:
- update /root/.ssh/authorized_keys according to the root_authorized_keys metadata;
- generate a self-signed SSL certificate;
- update zabbix_agentd.conf according to the org.erigones:zabbix_ip metadata;
- configure GitLab based on the gitlab:external_url metadata.
The last operation - GitLab configuration - will be performed during every VM boot. es-post-deploy.sh is a simple script and configures just a few things. There are many other configuration options that can be included in such scripts to automate the deployment of new VMs. This also means that the power and usability of a VM image largely depends on scripts like this.
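For illustration, the metadata keys mentioned above could be passed to a deployed VM roughly as follows; the customer_metadata wrapper follows SmartOS/vmadm conventions, the values are placeholders, and the exact way to set metadata depends on your Danube Cloud or SmartOS environment:

# illustrative metadata only - the keys are the ones read by es-post-deploy.sh
customer_metadata:
  root_authorized_keys: 'ssh-rsa AAAA... user@host'
  'org.erigones:zabbix_ip': '192.168.23.10'
  'gitlab:external_url': 'https://gitlab.example.com'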
This final play creates the image and metadata files on the buildnode and copies them to the builder host. The image name and other metadata are configured in the already mentioned ansible/vars/build/vm/contrib-gitlab-ce.yml file (the image_name, image_desc, image_homepage and builder_dir variables).
- name: Create and save image
  hosts: buildnode
  vars_files:
    - vars/build/vm/contrib-gitlab-ce.yml
    - vars/build/os/contrib-gitlab-ce.yml
  vars:
    image_tags: {internal: false, resize: true, deploy: false}
  tasks:
    - include: tasks/build/centos/create-image.yml
In order to use the convenient Makefile, the contrib-gitlab-ce target must be added to the BUILD_TARGETS list at the beginning of the file. That's all. Let's run it:
[user@builder ~/esdc-factory]$ make contrib-gitlab-ce
You can set the VERBOSE environment variable to make Ansible more verbose. This may come in handy if you need to debug your tasks.