Test deployment locally with tmt (from packit branch) #581

Merged: 12 commits, Sep 16, 2024
1 change: 1 addition & 0 deletions .ansible-lint
@@ -14,6 +14,7 @@ mock_modules:
# Ansible 2.9.27 in F35 still contains the k8s module so we can ignore the error until F36,
# where we can switch to kubernetes.core.k8s as ansible-5.x in F36 contains it.
- k8s
- kubernetes.core.k8s
# Ignore until F36, where these are in community.crypto collection (part of ansible-5.x rpm).
- openssh_keypair
- openssl_certificate
1 change: 1 addition & 0 deletions .fmf/version
@@ -0,0 +1 @@
1
25 changes: 25 additions & 0 deletions .github/workflows/tf-tests.yml
@@ -0,0 +1,25 @@
name: Schedule tests on Testing Farm
on:
pull_request:

# The concurrency key is used to prevent multiple workflows from running at the same time
concurrency:
group: my-concurrency-group
cancel-in-progress: true

jobs:
tests:
runs-on: ubuntu-latest
steps:
- name: Schedule tests on Testing Farm
uses: sclorg/testing-farm-as-github-action@v2
with:
compose: CentOS-Stream-9
api_key: ${{ secrets.TF_API_KEY }}
git_url: "https://github.com/packit/deployment"
git_ref: "tf-openshift-tests"
tmt_plan_regex: "deployment/remote"
tmt_hardware: '{"memory": ">= 13 GiB", "disk": [{"size": ">= 100 GB"}], "cpu": {"cores": ">= 6"}, "virtualization": {"is-supported": true}}'
pull_request_status_name: "Deployment"
timeout: 3600
secrets: CRC_PULL_SECRET=${{ secrets.CRC_PULL_SECRET }}
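The `tmt_hardware` value in the workflow above must be valid JSON; a quick local sanity check before committing changes to it (a sketch, not part of the workflow itself):

```shell
# Validate the tmt_hardware JSON string locally before pushing (sketch).
tmt_hardware='{"memory": ">= 13 GiB", "disk": [{"size": ">= 100 GB"}], "cpu": {"cores": ">= 6"}, "virtualization": {"is-supported": true}}'
echo "$tmt_hardware" | python3 -m json.tool > /dev/null && echo "tmt_hardware: valid JSON"
```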
2 changes: 1 addition & 1 deletion .zuul.yaml
@@ -3,7 +3,7 @@
check:
jobs:
- pre-commit
- deployment-tests
# - deployment-tests
gate:
jobs:
- pre-commit
43 changes: 43 additions & 0 deletions Makefile
@@ -7,11 +7,14 @@ AP := ansible-playbook -vv -c local -i localhost, -e ansible_python_interpreter=
# https://docs.ansible.com/ansible/latest/user_guide/playbooks_tags.html#special-tags
TAGS ?= all

CRC_PULL_SECRET ?= "$(shell cat secrets/openshift-local-pull-secret.yml)"

ifneq "$(shell whoami)" "root"
ASK_PASS ?= --ask-become-pass
endif

# Only for Packit team members with access to Bitwarden vault
# if not working prepend OPENSSL_CONF=/dev/null to script invocation
download-secrets:
./scripts/download_secrets.sh

@@ -50,3 +53,43 @@ check:
move-stable:
[[ -d move_stable_repositories ]] || scripts/move_stable.py init
scripts/move_stable.py move-all

# To be run inside the VM where the oc cluster is running!
# `cd /vagrant; SHARED_DIR=/vagrant make test-deploy` to use it inside the Vagrant VM.
# `SHARED_DIR=/home/tmt/deployment make test-deploy` to use it inside the tmt VM.
# SHARED_DIR can be /vagrant or /home/tmt/deployment, depending on the VM where tmt is run;
# look inside deployment.fmf to find out the value of SHARED_DIR set through tmt.
test-deploy:
DEPLOYMENT=dev $(AP) playbooks/generate-local-secrets.yml
DEPLOYMENT=dev $(AP) -e '{"user": $(USER), "src_dir": $(SHARED_DIR)}' playbooks/test_deploy_setup.yml
cd $(SHARED_DIR); DEPLOYMENT=dev $(AP) -e '{"container_engine": "podman", "registry": "default-route-openshift-image-registry.apps-crc.testing", "registry_user": "kubeadmin", "user": $(USER), "src_dir": $(SHARED_DIR)}' playbooks/test_deploy.yml

# The OpenShift Local pull_secret must exist locally,
# or you can define the CRC_PULL_SECRET variable instead
check-pull-secret:
if [ ! -f secrets/openshift-local-pull-secret.yml ] && [ -z "$(CRC_PULL_SECRET)" ]; then echo "no pull secret available: create the secrets/openshift-local-pull-secret.yml file or set the CRC_PULL_SECRET variable"; exit 1; else echo "pull secret found"; fi

# Execute the tmt deployment test on a local virtual machine provisioned by tmt
#
# A tmt-provisioned local virtual machine has 2 CPU cores by default;
# you need to change the tmt defaults to be able to run this test locally:
# change DEFAULT_CPU_COUNT in tmt/steps/provision/testcloud.py to 6.
#
# To run this same test remotely, using Testing Farm, we need the
# GitHub action; there is (at the moment) no other way to deal with
# the secrets (in our case the OpenShift Local pull secret).
# For this reason the deployment/remote plan is not called by this file;
# instead it is called from the Testing Farm GitHub action.
#
# Useful tmt/virsh commands for debugging this test are listed below:
# tmt run --id deployment --until execute
# tmt run --id deployment prepare --force
# tmt run --id deployment login --step prepare:start
# tmt run --id deployment execute --force
# tmt run --id deployment login --step execute:start
# tmt run --id deployment finish
# tmt clean runs
# tmt clean guests
# virsh list --all
tmt-local-test: check-pull-secret
tmt run --id deployment plans --name deployment/local
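Before running the target above, a quick preflight can show whether the host meets the hardware needs mentioned in the comments (a sketch; the authoritative requirements are in the tmt plan):

```shell
# Preflight sketch: the local deployment plan wants ~6 CPU cores and virtualization support.
cores=$(nproc)
echo "CPU cores: $cores"
if grep -q -E 'vmx|svm' /proc/cpuinfo; then echo "virtualization: supported"; else echo "virtualization: not detected"; fi
```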
45 changes: 45 additions & 0 deletions containers/Makefile
@@ -0,0 +1,45 @@
ANSIBLE_PYTHON ?= $(shell command -v /usr/bin/python3 2> /dev/null || echo /usr/bin/python2)
AP := ansible-playbook -vv -c local -i localhost, -e ansible_python_interpreter=$(ANSIBLE_PYTHON)
VAGRANT_SSH_PORT = "$(shell vagrant ssh-config | awk '/Port/{print $$2}')"
VAGRANT_SSH_USER = "$(shell vagrant ssh-config | awk '/User/{print $$2}')"
VAGRANT_SSH_GUEST = "$(shell vagrant ssh-config | awk '/HostName/{print $$2}')"
VAGRANT_SSH_IDENTITY_FILE = "$(shell vagrant ssh-config | awk '/IdentityFile/{print $$2}')"
VAGRANT_SSH_CONFIG = $(shell vagrant ssh-config | awk 'NR>1 {print " -o "$$1"="$$2}')
VAGRANT_SHARED_DIR = "/vagrant"

# To be used when the vagrant box link is broken; keep in sync with the Vagrantfile
#CENTOS_VAGRANT_BOX = CentOS-Stream-Vagrant-8-latest.x86_64.vagrant-libvirt.box
#CENTOS_VAGRANT_URL = https://cloud.centos.org/centos/8-stream/x86_64/images/$(CENTOS_VAGRANT_BOX)

CRC_PULL_SECRET ?= "$(shell cat secrets/openshift-local-pull-secret.yml)"

# for this command to work, you may need to:
# sudo systemctl enable --now libvirtd
# sudo systemctl enable --now virtnetworkd
oc-cluster-create:
if [ -n "$(CENTOS_VAGRANT_BOX)" ] && [ ! -f $(CENTOS_VAGRANT_BOX) ]; then wget $(CENTOS_VAGRANT_URL); fi;
vagrant up

oc-cluster-destroy:
vagrant destroy

oc-cluster-up:
vagrant up
vagrant ssh -c "cd $(VAGRANT_SHARED_DIR) && $(AP) --extra-vars user=vagrant playbooks/oc-cluster-run.yml"

oc-cluster-down:
vagrant halt

oc-cluster-ssh: oc-cluster-up
ssh $(VAGRANT_SSH_CONFIG) localhost

# The OpenShift Local pull_secret must exist locally,
# or you can define the CRC_PULL_SECRET variable instead
check-pull-secret:
if [ ! -f ../secrets/openshift-local-pull-secret.yml ] && [ -z "$(CRC_PULL_SECRET)" ]; then echo "no pull secret available: create the ../secrets/openshift-local-pull-secret.yml file or set the CRC_PULL_SECRET variable"; exit 1; else echo "pull secret found"; fi

# Execute the tmt deployment test on a Vagrant virtual machine.
# The virtual machine has to be up and running already;
# use the target oc-cluster-up
tmt-vagrant-test: check-pull-secret
tmt run --all provision --how connect --user vagrant --guest $(VAGRANT_SSH_GUEST) --port $(VAGRANT_SSH_PORT) --key $(VAGRANT_SSH_IDENTITY_FILE) plan --name deployment/vagrant
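The `VAGRANT_SSH_*` variables above scrape `vagrant ssh-config` output with awk. Here is the same extraction run against a sample config, so the pattern is easy to verify without a running VM (the host, port, and path values are made up for illustration):

```shell
# Sample `vagrant ssh-config` output; all values here are illustrative only.
sample='Host default
  HostName 192.168.121.50
  User vagrant
  Port 22
  IdentityFile /home/user/.vagrant.d/insecure_private_key'
echo "$sample" | awk '/Port/{print $2}'      # prints the guest SSH port: 22
echo "$sample" | awk '/HostName/{print $2}'  # prints the guest address: 192.168.121.50
```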
95 changes: 95 additions & 0 deletions containers/Vagrantfile
@@ -0,0 +1,95 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "centos/stream9"
config.vm.box_url = "https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-Vagrant-9-latest.x86_64.vagrant-libvirt.box"
#config.vm.box_url = "file:///$VagrantProjectHome/../CentOS-Stream-Vagrant-8-latest.x86_64.vagrant-libvirt.box"


# Forward traffic on the host to the development server on the guest
config.vm.network "forwarded_port", guest: 5000, host: 5000
# Forward traffic on the host to Redis on the guest
config.vm.network "forwarded_port", guest: 6379, host: 6379
# Forward traffic on the host to the SSE server on the guest
config.vm.network "forwarded_port", guest: 8080, host: 8080


if Vagrant.has_plugin?("vagrant-hostmanager")
config.hostmanager.enabled = true
config.hostmanager.manage_host = true
end

# Vagrant can share the source directory using rsync, NFS, or SSHFS (with the vagrant-sshfs
# plugin). By default it rsyncs the current working directory to /vagrant.
#
# If you would prefer to use NFS to share the directory uncomment this and configure NFS
# config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_version: 4, nfs_udp: false
config.vm.synced_folder "..", "/vagrant"
# config.vm.synced_folder ".", "/vagrant", disabled: true
# config.vm.synced_folder ".", "/srv/pagure",
# ssh_opts_append: "-o IdentitiesOnly=yes",
# type: "sshfs"

# To cache update packages (which is helpful if frequently doing `vagrant destroy && vagrant up`)
# you can create a local directory and share it to the guest's DNF cache. The directory needs to
# exist, so create it before you uncomment the line below.
#Dir.mkdir('.dnf-cache') unless File.exists?('.dnf-cache')
#config.vm.synced_folder ".dnf-cache", "/var/cache/dnf",
# type: "sshfs",
# sshfs_opts_append: "-o nonempty"

# Comment this line if you would like to disable the automatic update during provisioning
# config.vm.provision "shell", inline: "sudo dnf -y --disablerepo '*' --enablerepo=extras swap centos-linux-repos centos-stream-repos"

# !!!!!!! resize disk image !!!!!!!!!
config.vm.provision "shell", inline: "sudo dnf install -y cloud-utils-growpart"
config.vm.provision "shell", inline: "sudo growpart /dev/vda 1"
config.vm.provision "shell", inline: "sudo resize2fs /dev/vda1"
# config.vm.provision "shell", inline: "sudo xfs_growfs /dev/vda1" # this was for CentOS Stream 8

# bootstrap and run with ansible
config.vm.provision "ansible" do |ansible|
# ansible.verbose = "-vvv"
ansible.verbose = true
ansible.playbook = "../playbooks/oc-cluster-setup.yml"
ansible.extra_vars = {"user": "vagrant"}
end
config.vm.provision "ansible" do |ansible|
# ansible.verbose = "-vvv"
ansible.verbose = true
ansible.playbook = "../playbooks/oc-cluster-run.yml"
ansible.raw_arguments = ['--extra-vars', 'user=vagrant', '--extra-vars', '@../secrets/openshift-local-pull-secret.yml']
end
config.vm.provision "ansible" do |ansible|
# ansible.verbose = "-vvv"
ansible.become = true
ansible.become_user = "root"
ansible.verbose = true
ansible.playbook = "../playbooks/oc-cluster-tests-setup.yml"
end

# Create the box
config.vm.define "packit-oc-cluster" do |oc|
oc.vm.host_name = "packit-oc-cluster.example.com"

oc.vm.provider :libvirt do |domain|
# Season to taste
domain.cpus = 6
domain.graphics_type = "spice"
domain.memory = 14336
domain.video_type = "qxl"
domain.machine_virtual_size = 100

# Uncomment the following line if you would like to enable libvirt's unsafe cache
# mode. It is called unsafe for a reason, as it causes the virtual host to ignore all
# fsync() calls from the guest. Only do this if you are comfortable with the possibility of
# your development guest becoming corrupted (in which case you should only need to do a
# vagrant destroy and vagrant up to get a new one).
#
# domain.volume_cache = "unsafe"
end
end
end
83 changes: 83 additions & 0 deletions docs/deployment/testing-changes.md
@@ -68,3 +68,86 @@ This repository provides a helpful playbook to do this with one command:

Zuul provides a public key for every project. The Ansible playbook downloads the Zuul repository and passes the project tenant and name as parameters to the encryption script. This script then encrypts files with the public key of the project.
For more information please refer to [official docs](https://ansible.softwarefactory-project.io/docs/user/zuul_user.html#create-a-secret-to-be-used-in-jobs).

### Test Deployment locally with OpenShift Local

To use OpenShift Local you need a _pull secret_; download it from https://console.redhat.com/openshift/create/local and save it in a file called `secrets/openshift-local-pull-secret.yml` with this format:

```
---
pull_secret: <<< DOWNLOADED PULL SECRET CONTENT >>>
```
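A small helper sketch for producing that file from a downloaded pull secret (the `pull-secret.txt` input path is an assumption for illustration, not a project convention):

```shell
# Wrap the downloaded pull secret (assumed saved as pull-secret.txt) into the
# YAML format shown above; falls back to an empty JSON object if the file is missing.
mkdir -p secrets
printf -- '---\npull_secret: %s\n' "$(cat pull-secret.txt 2>/dev/null || echo '{}')" \
  > secrets/openshift-local-pull-secret.yml
```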

[Populate the `secrets` dir (`secrets/{SERVICE}/dev/`) with the other secrets.](secrets#running-a-servicebot-locally)

You can choose between a virtual machine created by Vagrant and one created by tmt.

Calling a test multiple times, modifying it, and debugging it is simpler in a Vagrant VM.

The tmt environment ensures a more reproducible test.

#### Using Vagrant

Create and start the OpenShift Local cluster in a Vagrant VM (it can take as long as an hour on my X1 ThinkPad):

```
cd containers; make oc-cluster-create
```

Once the cluster is up and running, you can test the `packit-service` deployment with the command:

```
cd containers; make tmt-vagrant-test
```

This command connects tmt to the Vagrant virtual machine and runs the deploy test there (`make test-deploy`).
You can run the test as many times as you want, as long as the virtual machine is up and the CRC cluster is started (`make oc-cluster-up` after every `make oc-cluster-down`).
You can also skip the `tmt` environment and run the test directly inside the VM:

```
cd containers;
make oc-cluster-ssh
```

Inside the Vagrant VM, as the vagrant user, run:

```
cd /vagrant
SHARED_DIR=/vagrant make test-deploy
```

You can work directly on the cluster:

```
oc login -u kubeadmin https://api.crc.testing:6443
oc project myproject
oc describe node
oc describe pods
oc describe pod packit-worker-0
...
```

You can destroy the `libvirt` machine with `cd containers; make oc-cluster-destroy` and re-create it with `cd containers; make oc-cluster-create`.

#### Using tmt

You can test the packit-service deployment in a tmt-created local VM with the command:

```
make tmt-local-test
```

It is quite hard to modify and debug a test inside a tmt-created VM.
But in case you need to, here is a list of commands that can be handy:

```
tmt run --id deployment --until execute
tmt run --id deployment prepare --force
tmt run --id deployment login --step prepare:start
tmt run --id deployment execute --force
tmt run --id deployment login --step execute:start
tmt run --id deployment finish
tmt clean runs
tmt clean guests
virsh list --all
```
2 changes: 1 addition & 1 deletion openshift/redis.yml.j2
@@ -20,7 +20,7 @@ spec:
spec:
containers:
- name: redis
image: quay.io/sclorg/redis-7-c9s
image: quay.io/sclorg/redis-7-c9s:c9s
ports:
- containerPort: 6379
volumeMounts: