Initial batch of changes from Matt Johnson #18

Merged 7 commits on Jan 14, 2025
54 changes: 47 additions & 7 deletions README.md
@@ -3,12 +3,12 @@
This is a collection of Ansible playbooks, Terraform configurations and scripts to deploy and operate Incus clusters.

## How to get the test setup running
### Install incus and OpenTofu
Install incus stable or LTS on your system from the [zabbly/incus](https://github.com/zabbly/incus) release and initialize it on your local machine.
### Install Incus and OpenTofu
Install Incus stable or LTS on your system from the [zabbly/incus](https://github.com/zabbly/incus) release and initialize it on your local machine.
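
On a Debian or Ubuntu system this can be as simple as the following (a sketch that assumes the zabbly package repository is already configured as described on that page, and that a minimal local setup is acceptable):

```
# Assumes the zabbly/incus apt repository is already configured
apt install incus

# Minimal non-interactive setup; run `incus admin init` without --minimal for full control
incus admin init --minimal
```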

Install [OpenTofu](https://opentofu.org/docs/intro/install/).

Install the required ceph packages for ansible on the controller, on Debian that's the `ceph-base` and `ceph-common` packages:
Install the required ceph packages for Ansible on the controller; on Debian these are the `ceph-base` and `ceph-common` packages:
```
apt install --no-install-recommends ceph-base ceph-common
```
@@ -24,9 +24,13 @@ Init the terraform project:
tofu init
```

Create the VMs for testing:
Create 5 VMs and associated networks and storage volumes for testing an Incus cluster:
If your Incus host needs different values from the default, you may need
to copy `terraform.tfvars.example` to `terraform.tfvars` and update the
variables.
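
For example, `terraform.tfvars` could look like this (a sketch: the variable names come from this change's Terraform files, while the values shown are the previously hard-coded defaults and are only assumptions to adjust for your host):

```
# Example values only - adjust to match your Incus host
incus_remote            = "local"
incus_storage_pool      = "default"
incus_network           = "incusbr0"
ovn_uplink_ipv4_address = "172.31.254.1/24"
ovn_uplink_ipv6_address = "fd00:1e4d:637d:1234::1/64"
```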

```
tofu apply
tofu apply -target=module.baremetal
```

### Run the Ansible Playbook
@@ -35,22 +39,58 @@ Go to the ansible directory:
cd ../ansible/
```

NOTE: To install the same version of Ansible that this was tested with:
```
pyenv install 3.13.1
pipenv --python "3.13.1" install
pipenv shell
ansible-galaxy install -r ansible_requirements.yml
```

Copy the example inventory file:
```
cp hosts.yaml.example hosts.yaml
```
NOTE: If you are connecting to a remote Incus host, you will need to change the `ansible_incus_remote` variable to match the name of the Incus remote (see `incus remote list` for the available remote names).
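
For example, a hypothetical `hosts.yaml` snippet setting it for all hosts (the exact inventory layout follows `hosts.yaml.example`; the remote name here is a placeholder):

```
all:
  vars:
    # Placeholder - use a name reported by `incus remote list`
    ansible_incus_remote: my-remote
```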

Run the Playbooks:
```
ansible-playbook deploy.yaml
```

NOTE: When re-deploying the same cluster (e.g. following a `terraform
destroy`), you need to make sure to also clear any local state from the
NOTE: When re-deploying the same cluster (e.g. following a `terraform destroy`),
you need to make sure to also clear any local state from the
`data` directory; failure to do so will cause Ceph/OVN to attempt to
connect to the previously deployed systems, which will cause the
deployment to get stuck.

```
rm ansible/data/ceph/*
rm ansible/data/lvmcluster/*
rm ansible/data/ovn/*
```

### Test a VM and container on the new Incus cluster

```
# Open a shell on one of the Incus cluster nodes
incus exec server01 bash

# List all instances
incus list

# Launch a system container
incus launch images:ubuntu/22.04 ubuntu-container

# Launch a virtual machine
incus launch images:ubuntu/22.04 ubuntu-vm --vm

# Launch an application container
incus remote add oci-docker https://docker.io --protocol=oci
incus launch oci-docker:hello-world --ephemeral --console
incus launch oci-docker:nginx nginx-app-container
```

## Deploying against production systems
### Requirements (when using Incus with both Ceph and OVN)

1 change: 1 addition & 0 deletions ansible/.gitignore
@@ -1,2 +1,3 @@
data/*
hosts.yaml
Pipfile.lock
14 changes: 14 additions & 0 deletions ansible/Pipfile
@@ -0,0 +1,14 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
jmespath = "*"
ansible-core = "==2.18.1"

[dev-packages]

[requires]
python_version = "3.13"
python_full_version = "3.13.1"
9 changes: 9 additions & 0 deletions ansible/ansible_requirements.yml
@@ -0,0 +1,9 @@
---
# Install Collections and Roles with Ansible Galaxy
# ansible-galaxy install -r ansible_requirements.yml

collections:
- name: community.crypto
- name: community.general

roles:
158 changes: 79 additions & 79 deletions ansible/books/ceph.yaml
@@ -1,83 +1,4 @@
---
- name: Ceph - Generate cluster keys and maps
  hosts: all
  order: shuffle
  gather_facts: yes
  gather_subset:
    - "default_ipv4"
    - "default_ipv6"
  vars:
    task_fsid: "{{ ceph_fsid | default('') }}"
    task_bootstrap_osd_keyring: ../data/ceph/cluster.{{ task_fsid }}.bootstrap-osd.keyring
    task_client_admin_keyring: ../data/ceph/cluster.{{ task_fsid }}.client.admin.keyring
    task_mon_keyring: ../data/ceph/cluster.{{ task_fsid }}.mon.keyring
    task_mon_map: ../data/ceph/cluster.{{ task_fsid }}.mon.map
    task_release: "{{ ceph_release | default('squid') }}"
    task_roles: "{{ ceph_roles | default([]) }}"

    task_release_majors:
      luminous: 12
      mimic: 13
      nautilus: 14
      octopus: 15
      pacific: 16
      quincy: 17
      reef: 18
      squid: 19
  any_errors_fatal: true
  tasks:
    - name: Generate mon keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool --create-keyring {{ task_mon_keyring }} --gen-key -n mon. --cap mon 'allow *'
        creates: '{{ task_mon_keyring }}'
      throttle: 1
      when: 'task_fsid'

    - name: Generate client.admin keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool --create-keyring {{ task_client_admin_keyring }} --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
        creates: '{{ task_client_admin_keyring }}'
      throttle: 1
      notify: Add key to client.admin keyring
      when: 'task_fsid'

    - name: Generate bootstrap-osd keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool --create-keyring {{ task_bootstrap_osd_keyring }} --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
        creates: '{{ task_bootstrap_osd_keyring }}'
      throttle: 1
      notify: Add key to bootstrap-osd keyring
      when: 'task_fsid'

    - name: Generate mon map
      delegate_to: 127.0.0.1
      shell:
        cmd: monmaptool --create{% if task_release_majors[task_release] | default(None) %} --set-min-mon-release={{ task_release_majors[task_release] }}{% endif %} --fsid {{ task_fsid }} {{ task_mon_map }}
        creates: '{{ task_mon_map }}'
      throttle: 1
      notify: Add nodes to mon map
      when: 'task_fsid'

  handlers:
    - name: Add key to client.admin keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool {{ task_mon_keyring }} --import-keyring {{ task_client_admin_keyring }}

    - name: Add key to bootstrap-osd keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool {{ task_mon_keyring }} --import-keyring {{ task_bootstrap_osd_keyring }}

    - name: Add nodes to mon map
      delegate_to: 127.0.0.1
      shell:
        cmd: monmaptool --add {{ item.name }} {{ item.ip }} {{ task_mon_map }}
      loop: "{{ lookup('template', '../files/ceph/ceph.monitors.tpl') | from_yaml | default([]) }}"

- name: Ceph - Add package repository
  hosts: all
  order: shuffle
@@ -191,6 +112,85 @@
        state: present
      when: '"rgw" in task_roles'

- name: Ceph - Generate cluster keys and maps
  hosts: all
  order: shuffle
  gather_facts: yes
  gather_subset:
    - "default_ipv4"
    - "default_ipv6"
  vars:
    task_fsid: "{{ ceph_fsid | default('') }}"
    task_bootstrap_osd_keyring: ../data/ceph/cluster.{{ task_fsid }}.bootstrap-osd.keyring
    task_client_admin_keyring: ../data/ceph/cluster.{{ task_fsid }}.client.admin.keyring
    task_mon_keyring: ../data/ceph/cluster.{{ task_fsid }}.mon.keyring
    task_mon_map: ../data/ceph/cluster.{{ task_fsid }}.mon.map
    task_release: "{{ ceph_release | default('squid') }}"
    task_roles: "{{ ceph_roles | default([]) }}"

    task_release_majors:
      luminous: 12
      mimic: 13
      nautilus: 14
      octopus: 15
      pacific: 16
      quincy: 17
      reef: 18
      squid: 19
  any_errors_fatal: true
  tasks:
    - name: Generate mon keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool --create-keyring {{ task_mon_keyring }} --gen-key -n mon. --cap mon 'allow *'
        creates: '{{ task_mon_keyring }}'
      throttle: 1
      when: 'task_fsid'

    - name: Generate client.admin keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool --create-keyring {{ task_client_admin_keyring }} --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
        creates: '{{ task_client_admin_keyring }}'
      throttle: 1
      notify: Add key to client.admin keyring
      when: 'task_fsid'

    - name: Generate bootstrap-osd keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool --create-keyring {{ task_bootstrap_osd_keyring }} --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
        creates: '{{ task_bootstrap_osd_keyring }}'
      throttle: 1
      notify: Add key to bootstrap-osd keyring
      when: 'task_fsid'

    - name: Generate mon map
      delegate_to: 127.0.0.1
      shell:
        cmd: monmaptool --create{% if task_release_majors[task_release] | default(None) %} --set-min-mon-release={{ task_release_majors[task_release] }}{% endif %} --fsid {{ task_fsid }} {{ task_mon_map }}
        creates: '{{ task_mon_map }}'
      throttle: 1
      notify: Add nodes to mon map
      when: 'task_fsid'

  handlers:
    - name: Add key to client.admin keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool {{ task_mon_keyring }} --import-keyring {{ task_client_admin_keyring }}

    - name: Add key to bootstrap-osd keyring
      delegate_to: 127.0.0.1
      shell:
        cmd: ceph-authtool {{ task_mon_keyring }} --import-keyring {{ task_bootstrap_osd_keyring }}

    - name: Add nodes to mon map
      delegate_to: 127.0.0.1
      shell:
        cmd: monmaptool --add {{ item.name }} {{ item.ip }} {{ task_mon_map }}
      loop: "{{ lookup('template', '../files/ceph/ceph.monitors.tpl') | from_yaml | default([]) }}"

- name: Ceph - Set up config and keyrings
  hosts: all
  order: shuffle
2 changes: 1 addition & 1 deletion ansible/files/ceph/ceph.monitors.tpl
@@ -1,4 +1,4 @@
{% for host in vars['ansible_play_hosts'] %}
{% for host in groups['all'] %}
{% if hostvars[host]['ceph_fsid'] == task_fsid and "mon" in hostvars[host]['ceph_roles'] %}
- name: "{{ host }}"
ip: "{{ hostvars[host]['ceph_ip_address'] | default(hostvars[host]['ansible_default_ipv6']['address'] | default(hostvars[host]['ansible_default_ipv4']['address'])) }}"
3 changes: 3 additions & 0 deletions terraform/.gitignore
@@ -2,3 +2,6 @@
.terraform.lock.hcl
terraform.tfstate
terraform.tfstate.backup

*.tfvars
!*.auto.tfvars
6 changes: 3 additions & 3 deletions terraform/baremetal-incus/main.tf
@@ -17,9 +17,9 @@ resource "incus_network" "this" {
description = "Network used to test incus-deploy (OVN uplink)"

config = {
"ipv4.address" = "172.31.254.1/24"
"ipv4.address" = var.ovn_uplink_ipv4_address
"ipv4.nat" = "true"
"ipv6.address" = "fd00:1e4d:637d:1234::1/64"
"ipv6.address" = var.ovn_uplink_ipv6_address
"ipv6.nat" = "true"
}
}
@@ -49,7 +49,7 @@ resource "incus_profile" "this" {
name = "eth0"

properties = {
"network" = "incusbr0"
"network" = var.network
"name" = "eth0"
}
}
14 changes: 14 additions & 0 deletions terraform/baremetal-incus/variables.tf
@@ -17,3 +17,17 @@ variable "memory" {
variable "storage_pool" {
type = string
}

variable "network" {
type = string
}

variable "ovn_uplink_ipv4_address" {
type = string
default = ""
}

variable "ovn_uplink_ipv6_address" {
type = string
default = ""
}
11 changes: 9 additions & 2 deletions terraform/main.tf
@@ -5,7 +5,12 @@ module "baremetal" {
instance_names = ["server01", "server02", "server03", "server04", "server05"]
image = "images:ubuntu/22.04"
memory = "4GiB"
storage_pool = "default"

storage_pool = var.incus_storage_pool
network = var.incus_network

ovn_uplink_ipv4_address = var.ovn_uplink_ipv4_address
ovn_uplink_ipv6_address = var.ovn_uplink_ipv6_address
}

module "services" {
@@ -14,5 +19,7 @@ module "services" {
project_name = "dev-incus-deploy-services"
instance_names = ["ceph-mds01", "ceph-mds02", "ceph-mds03", "ceph-mgr01", "ceph-mgr02", "ceph-mgr03", "ceph-rgw01", "ceph-rgw02", "ceph-rgw03"]
image = "images:ubuntu/24.04"
storage_pool = "default"

storage_pool = var.incus_storage_pool
network = var.incus_network
}
6 changes: 6 additions & 0 deletions terraform/provider_incus.tf
@@ -0,0 +1,6 @@
provider "incus" {
remote {
name = var.incus_remote
default = true
}
}
2 changes: 1 addition & 1 deletion terraform/services/main.tf
@@ -37,7 +37,7 @@ resource "incus_profile" "this" {
name = "eth0"

properties = {
"network" = "incusbr0"
"network" = var.network
"name" = "eth0"
}
}
5 changes: 5 additions & 0 deletions terraform/services/variables.tf
@@ -13,3 +13,8 @@ variable "image" {
variable "storage_pool" {
type = string
}

variable "network" {
type = string
default = ""
}