diff --git a/README.md b/README.md
index d3d070a..35d2ab3 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@ Playbooks are located in the `playbooks/` directory.
 | `virt-host-setup.yml` | `./inventory/virthost/` | Provision a virtual machine host |
 | `kube-install.yml` | `./inventory/vms.local.generated` | Install and configure a k8s cluster |
 | `kube-teardown.yml` | `./inventory/vms.local.generated` | Runs `kubeadm reset` on all nodes to tear down k8s |
-| `vm-teardown.yml` | `./inventory/virthost/` | Destroys VMs on the virtual machine host |
+| `vm-teardown.yml` | `./inventory/virthost/` | Destroys & removes VMs on the virtual machine host |
 | `multus-cni.yml` | `./inventory/vms.local.generated` | Compiles [multus-cni](https://github.com/Intel-Corp/multus-cni) |
 | `gluster-install.yml` | `./inventory/vms.local.generated` | Install a GlusterFS cluster across VMs (requires vm-attach-disk) |
 | `fedora-python-bootstrapper.yml` | `./inventory/vms.local.generated` | Bootstrapping Python dependencies on cloud images |
@@ -195,6 +195,24 @@ kube-node-3   Ready     <none>    9m        v1.8.3
 Everything should be marked as ready. If so, you're good to go!
 
+## Creating a bootstrapped image
+
+If you need to spin up multiple clusters, or otherwise spin up a bunch of VMs for a cluster, it may behoove you to "bootstrap" your VM image so that you don't have to download the dependencies many times over. You can create a sort of golden image with the `./playbooks/create-bootstrapped-image.yml` playbook.
+
+For example, you can run it like so:
+
+```
+$ ansible-playbook -i inventory/virthost.inventory \
+    -e "@./inventory/examples/image-bootstrap/extravars.yml" \
+    playbooks/create-bootstrapped-image.yml
+```
+
+This will result in an image being created @ `/home/images/bootstrapped.qcow2` (by default; the path can be changed). You can then specify this image when creating a cluster.
+
+For example, the post-bootstrap extra vars in `./inventory/examples/image-bootstrap/postbootstrap-extravars.yml` set `image_destination_name: bootstrapped.qcow2` to spin the cluster VMs up from that image.
+
 # About
 
 Initially inspired by:
diff --git a/contrib/multi-cluster/README.md b/contrib/multi-cluster/README.md
new file mode 100644
index 0000000..0ee8194
--- /dev/null
+++ b/contrib/multi-cluster/README.md
@@ -0,0 +1,207 @@
+# Multi-cluster creator!
+
+A series of scripts designed to spin up multiple clusters at once. Originally designed for a tutorial / classroom setup where you're spinning up a cluster for each attendee to use.
+
+These scripts are designed to be run from the root directory of this clone.
+
+## Prerequisites
+
+* A physical machine with CentOS 7
+  - We call this machine "the virthost"; it hosts your virtual machines
+* On your client machine...
+  - A clone of this repo
+  - SSH keys to that physical machine that allow you to log in as root (passwordless is convenient)
+  - Ansible. Tested with version 2.5.7
+
+## General process
+
+In overview, what we're going to do is:
+
+* Set up the virtualization host ("virthost")
+* Create a "bootstrap image" (a golden image from which VMs are created)
+* Run the multi-cluster spin-up scripts.
+
+## Downloading Ansible Galaxy roles
+
+If this is your first time cloning this repository, go ahead and install the requirements for Ansible Galaxy with:
+
+```
+ansible-galaxy install -r requirements.yml
+```
+
+## Creating an inventory for your virthost
+
+We generally call the box we run the virtual machines on "the virthost". Let's create an inventory for it.
+
+**NOTE**: You'll need to update the IP address to the proper one for your virthost. You can also change the name from `droctagon2` to any name you wish.
+
+```
+export VIRTHOST_IP=192.168.1.55
+cat << EOF > ./inventory/virthost.inventory
+droctagon2 ansible_host=$VIRTHOST_IP ansible_ssh_user=root
+
+[virthost]
+droctagon2
+EOF
+```
+
+## Setting up the virthost
+
+You'll first need to run a playbook to set up the virthost. This has the side effect of also spinning up some VMs -- which we don't need yet. So you'll run this first, then we'll use those VMs to verify we can access them, and then we'll remove them.
+
+```
+ansible-playbook -i inventory/virthost.inventory -e 'ssh_proxy_enabled=true' playbooks/virthost-setup.yml
+```
+
+This will result in a locally generated inventory with the VMs that were spun up:
+
+```
+cat inventory/vms.local.generated
+```
+
+Now we can use information from that inventory to access those machines -- a key has also been created for us at `/home/{your user name}/.ssh/{virthost name}/id_vm_rsa`
+
+So for example I can SSH to a VM using:
+
+```
+ssh -i /home/doug/.ssh/droctagon2/id_vm_rsa -o ProxyCommand="ssh -W %h:%p root@192.168.1.55" centos@192.168.122.68
+```
+
+Where:
+
+* `/home/doug/.ssh/droctagon2/id_vm_rsa` is the key path listed at the bottom of `./inventory/vms.local.generated`
+* `192.168.1.55` is the IP address of my virtualization host
+* `192.168.122.68` is the IP address of the VM from the top section of `./inventory/vms.local.generated`
+
+Now you can remove those VMs (and I recommend you do) with:
+
+```
+ansible-playbook -i inventory/virthost.inventory playbooks/vm-teardown.yml
+```
+
+## OPTION: Download the bootstrap image
+
+Go ahead and place this image on your virtualization host -- that is, SSH to the virthost and run:
+
+```
+curl http://speedmodeling.org/kube/bootstrapped.qcow2 -o /home/images/bootstrapped.qcow2
+```
+
+## Creating the bootstrap image
+
+You can skip this if you downloaded an existing one.
+
+Otherwise, run the `create-bootstrapped-image.yml` playbook like so:
+
+```
+$ ansible-playbook -i inventory/virthost.inventory \
+    -e "@./inventory/examples/image-bootstrap/extravars.yml" \
+    playbooks/create-bootstrapped-image.yml
+```
+
+
+## Run the multi-cluster spin-up all at once...
+
+These scripts expect your virthost inventory to live @ `./inventory/virthost.inventory`.
+
+It might be convenient to set the number of clusters like so:
+
+```
+export CLUSTERS=3
+```
+
+"Run it all" with the `all.sh` script, which runs all the individual plays:
+
+```
+./contrib/multi-cluster/all.sh $CLUSTERS
+```
+
+After you've set it up, you'll find the information to log into the clusters in your inventory directory...
+
+```
+cat inventory/multi-cluster/cluster-1.inventory
+```
+
+Replace `1` with whichever cluster number you want. So if you had `CLUSTERS=3`, you should have `cluster-1.inventory` through `cluster-3.inventory`.
+
+You can then use the IP addresses listed in these inventories to SSH to each of the hosts. The same SSH key used earlier is still the key you'll use, and it's listed in each of the inventories.
+
+When this completes, you should have a number of clusters. Let's take a look at the first cluster.
+
+```
+ssh -i /home/doug/.ssh/droctagon2/id_vm_rsa -o ProxyCommand="ssh -W %h:%p root@192.168.1.55" centos@$(cat inventory/multi-cluster/cluster-1.inventory | grep kube-master-1 | head -n1 | cut -d= -f2)
+```
+
+Replace the SSH key with your own, and replace `root@192.168.1.55` with the IP address of your virthost.
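+
+If you find yourself doing this a lot, a small helper can compose that SSH command from a generated cluster inventory. Here's a minimal sketch (the `ssh-cluster.sh` name is hypothetical) that reuses the same `grep`/`cut` pipeline shown above and in `tmate.pl`:
+
+```
+#!/bin/bash
+# ssh-cluster.sh <cluster-number> <virthost-ip>
+# SSH to a cluster's master, proxied through the virthost.
+set -e
+inv="inventory/multi-cluster/cluster-$1.inventory"
+# The generated inventory lists the master's IP and the private key path;
+# pull them out the same way the commands above do.
+ip=$(grep kube-master "$inv" | head -n1 | cut -d= -f2)
+key=$(grep private_key "$inv" | head -n1 | cut -d= -f2)
+ssh -i "$key" -o ProxyCommand="ssh -W %h:%p root@$2" "centos@$ip"
+```
+
+Usage would be, for example, `./ssh-cluster.sh 1 192.168.1.55`.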
+
+Now, after SSHing to that machine -- you should be able to see:
+
+```
+[centos@kube-master-1 ~]$ kubectl get nodes
+NAME            STATUS     ROLES     AGE       VERSION
+kube-master-1   NotReady   master    1h        v1.11.2
+kube-node-2     NotReady   <none>    1h        v1.11.2
+kube-node-3     NotReady   <none>    1h        v1.11.2
+```
+
+Note that the `NotReady` state is expected: the cluster is up, but it is intentionally not ready because the attendees are expected to install the CNI plugins.
+
+You can then tear down those VMs if you please:
+
+```
+./contrib/multi-cluster/multi-teardown.sh $CLUSTERS
+```
+
+
+## Giving access via SSH to people
+
+First, you must set the `CLUSTERS` environment variable for this to work. It also requires a Perl install on the machine you're running it from.
+
+```
+export CLUSTERS=3
+./contrib/multi-cluster/tmate.pl
+```
+
+This will create 2 tmate sessions for each master machine (one as a backup in case the user types `exit`, which ruins that session).
+
+The output will give you a JSON structure; you're looking for the line that looks like:
+
+```
+    "link": "https://markdownshare.com/view/ea8571af-8c97-469a-935b-470f33476214",
+```
+
+This will be a link to the posted markdown showing the tmate SSH URLs.
+
+## Adding additional interfaces
+
+In case you have to do it manually...
+
+```
+virsh list --all | grep node | awk '{print $2}' | xargs -L1 -i virsh attach-interface --domain {} --type bridge --model virtio --source virbr0 --config --live
+```
+
+## Multi-cluster a la carte -- step-by-step if you please
+
+Run it with the number of clusters you're going to create.
+
+```
+./contrib/multi-cluster/extravars-creator.sh $CLUSTERS
+```
+
+Then you can run the multi spinup...
+
+```
+./contrib/multi-cluster/multi-spinup.sh $CLUSTERS
+```
+
+Bring up the kube clusters with a multi init...
+
+```
+./contrib/multi-cluster/multi-init.sh $CLUSTERS
+```
+
+And tear 'em down with the multi-teardown...
+
+```
+./contrib/multi-cluster/multi-teardown.sh $CLUSTERS
+```
diff --git a/contrib/multi-cluster/all.sh b/contrib/multi-cluster/all.sh
new file mode 100755
index 0000000..3e2675f
--- /dev/null
+++ b/contrib/multi-cluster/all.sh
@@ -0,0 +1,5 @@
+#!/bin/bash
+./contrib/multi-cluster/extravars-creator.sh $1
+./contrib/multi-cluster/multi-spinup.sh $1
+sleep 15
+./contrib/multi-cluster/multi-init.sh $1
diff --git a/contrib/multi-cluster/extravars-creator.sh b/contrib/multi-cluster/extravars-creator.sh
new file mode 100755
index 0000000..1c538e1
--- /dev/null
+++ b/contrib/multi-cluster/extravars-creator.sh
@@ -0,0 +1,68 @@
+#!/bin/bash
+
+# Usage: ./contrib/multi-cluster/extravars-creator.sh $number_of_clusters
+
+# Alright, what do we need...
+# 1. We need to generate inventories...
+
+echo "Warning: You're about to delete the existing extravars files!"
+# sleep 2
+
+rm -Rf ./inventory/multi-cluster
+mkdir -p ./inventory/multi-cluster
+
+masternumber=-2
+ip_master=47
+
+for (( c=1; c<=$1; c++ ))
+do
+  filename="./inventory/multi-cluster/cluster-$c.yml"
+  echo "Creating extravars file $filename"
+  # Increment the node numbers.
+  masternumber=$(($masternumber+3))
+  firstnodenumber=$(($masternumber+1))
+  secondnodenumber=$(($masternumber+2))
+  ip_master=$(($ip_master+3))
+  ip_first=$(($ip_master+1))
+  ip_second=$(($ip_master+2))
+  # Create the extra vars we need.
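+  # (With the counters above, cluster 1 gets kube-master-1 at 192.168.122.50 and
+  # kube-node-2 at .51, with a second node, kube-node-3 at .52, commented out below;
+  # cluster 2 gets kube-master-4 at .53, and so on -- everything advances by 3 per cluster.)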
+  cat << EOF > $filename
+kubeadm_version: v1.11.2
+hugepages_enabled: true
+image_destination_name: bootstrapped.qcow2
+spare_disk_attach: false
+pod_network_type: "none"
+enable_compute_device: true
+customize_kube_config: true
+network_type: "extra_interface"
+system_network: 192.168.122.0
+system_netmask: 255.255.255.0
+system_broadcast: 192.168.122.255
+system_gateway: 192.168.122.1
+system_nameservers: 192.168.122.1
+system_dns_search: example.com
+# ignore_preflight_version: true
+# bridge_networking: true
+# bridge_name: br0
+# bridge_physical_nic: "enp1s0f1"
+# bridge_network_name: "br0"
+# bridge_network_cidr: 192.168.1.0/24
+virtual_machines:
+  - name: kube-master-$masternumber
+    node_type: master
+    system_ram_mb: 4096
+    system_cpus: 1
+    static_ip: 192.168.122.$ip_master
+  - name: kube-node-$firstnodenumber
+    node_type: nodes
+    system_ram_mb: 4096
+    system_cpus: 1
+    static_ip: 192.168.122.$ip_first
+#  - name: kube-node-$secondnodenumber
+#    node_type: nodes
+#    system_ram_mb: 4096
+#    system_cpus: 1
+#    static_ip: 192.168.122.$ip_second
+enable_userspace_cni: true
+EOF
+done
diff --git a/contrib/multi-cluster/multi-init.sh b/contrib/multi-cluster/multi-init.sh
new file mode 100755
index 0000000..0948a13
--- /dev/null
+++ b/contrib/multi-cluster/multi-init.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+# First argument is number of clusters. See README.md for more details.
+
+for (( c=1; c<=$1; c++ ))
+do
+  extravars="./inventory/multi-cluster/cluster-$c.yml"
+  inventory="./inventory/multi-cluster/cluster-$c.inventory"
+  cmd="ansible-playbook -i \"$inventory\" -e \"@$extravars\" playbooks/kube-init.yml"
+  echo Running: $cmd
+  eval $cmd
+done
\ No newline at end of file
diff --git a/contrib/multi-cluster/multi-spinup.sh b/contrib/multi-cluster/multi-spinup.sh
new file mode 100755
index 0000000..f91141b
--- /dev/null
+++ b/contrib/multi-cluster/multi-spinup.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+# First argument is number of clusters. See README.md for more details.
+
+for (( c=1; c<=$1; c++ ))
+do
+  filename="./inventory/multi-cluster/cluster-$c.yml"
+  cmd="ansible-playbook -i inventory/virthost.inventory -e 'ssh_proxy_enabled=true' -e 'attach_additional_virtio_device=true' -e \"@$filename\" playbooks/virthost-setup.yml"
+  echo Running: $cmd
+  eval $cmd
+  mv inventory/vms.local.generated ./inventory/multi-cluster/cluster-$c.inventory
+  echo "New inventory @ ./inventory/multi-cluster/cluster-$c.inventory"
+done
\ No newline at end of file
diff --git a/contrib/multi-cluster/multi-teardown.sh b/contrib/multi-cluster/multi-teardown.sh
new file mode 100755
index 0000000..a442e2e
--- /dev/null
+++ b/contrib/multi-cluster/multi-teardown.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+# First argument is number of clusters. See README.md for more details.
+
+for (( c=1; c<=$1; c++ ))
+do
+  extravars="./inventory/multi-cluster/cluster-$c.yml"
+  cmd="ansible-playbook -i inventory/virthost.inventory -e \"@$extravars\" playbooks/vm-teardown.yml"
+  echo Running: $cmd
+  eval $cmd
+done
\ No newline at end of file
diff --git a/contrib/multi-cluster/rebuild_inventory.sh b/contrib/multi-cluster/rebuild_inventory.sh
new file mode 100755
index 0000000..7c523bd
--- /dev/null
+++ b/contrib/multi-cluster/rebuild_inventory.sh
@@ -0,0 +1,39 @@
+#!/bin/bash
+
+# ----------------------------------------
+# -- WORK IN PROGRESS
+# An attempt at rebuilding the inventory
+# after the virthost has been rebooted.
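+# (Approach: recover each running VM's IP by looking up its MAC with
+# `virsh dumpxml` on the virthost, then matching that MAC in `arp -an`,
+# as the commented-out virt-addr() helper at the bottom does.)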
+# ---------------------------------------- + +virthost_ip=$(cat inventory/virthost.inventory | grep ansible_host | awk '{ print $2 }' | cut -d= -f2) + +VM=kube-master-1 + +cat <<'EOF' > /tmp/shell.txt +arp -an | grep "`virsh dumpxml THE_VIRTUAL_MACHINE | grep "mac address" | sed "s/.*'\(.*\)'.*/\1/g"`" | awk '{ gsub(/[\(\)]/,"",$2); print $2 }' +EOF + +sed -i -e "s/THE_VIRTUAL_MACHINE/$VM/" /tmp/shell.txt + +MYCOMMAND=$(base64 -w0 /tmp/shell.txt) +echo $MYCOMMAND | base64 -d + +# ssh user@remotehost "echo $MYCOMMAND | base64 -d | bash" + +# ssh root@$virthost_ip "arp -an | grep \"`virsh dumpxml $VM | grep \"mac address\" | sed \"s/.*'\(.*\)'.*/\1/g\"`\" | awk '{ gsub(/[\(\)]/,\"\",$2); print $2 }'" + +# #!/bin/bash +# # Returns the IP address of a running KVM guest VM +# # Assumes a working KVM/libvirt environment +# # +# # Install: +# # Add this bash function to your ~/.bashrc and `source ~/.bashrc`. +# # Usage: +# # $ virt-addr vm-name +# # 192.0.2.16 +# # +# virt-addr() { +# VM="$1" +# arp -an | grep "`virsh dumpxml $VM | grep "mac address" | sed "s/.*'\(.*\)'.*/\1/g"`" | awk '{ gsub(/[\(\)]/,"",$2); print $2 }' +# } \ No newline at end of file diff --git a/contrib/multi-cluster/scratch.md b/contrib/multi-cluster/scratch.md new file mode 100644 index 0000000..3bef3d2 --- /dev/null +++ b/contrib/multi-cluster/scratch.md @@ -0,0 +1,244 @@ +# kubernetes patch process + +Unfortunately this results in a conflicting `pkg/kubectl/genericclioptions/resource/helper_test.go` + +``` +git clone https://github.com/dashpole/kubernetes.git dashpole.kubernetes +cd dashpole.kubernetes/ +git remote add upstream https://github.com/kubernetes/kubernetes.git +git fetch upstream +git checkout release-1.11 +git pull +git checkout device_id +git checkout -b rebase_deviceid +git rebase release-1.11 +``` + +This results in a lot of errors. + +``` +git diff HEAD~6 > /tmp/kube.patch +wc -l /tmp/kube.patch +git checkout release-1.11 +git apply /tmp/kube.patch +``` + +Making with a custom version... + +> @dougbtv looking at code in the scripts that build the version number, looks like you can set `KUBE_GIT_VERSION_FILE` to a file and the file can have the format and you can set it to anything you wish. (Though to be honest i haven’t tried quick-release with it) + +``` +[centos@kube-dev kubernetes]$ cat DOUG.VERSION.FILE +KUBE_GIT_COMMIT='9ff717ee9c87d5b3248a3d28b8893e21028ea42d' +KUBE_GIT_TREE_STATE='clean' +KUBE_GIT_VERSION='v1.11.2-beta.0.2333+9ff717ee9c87d5' +KUBE_GIT_MAJOR='1' +KUBE_GIT_MINOR='11+' +``` + +Then build it... + +``` +$ export KUBE_GIT_VERSION_FILE=/home/centos/kubernetes/DOUG.VERSION.FILE +$ make quick-release +``` + +``` +KUBE_GIT_VERSION_FILE=/home/centos/kubernetes/DOUG.VERSION.FILE KUBE_FASTBUILD=true make quick-release +``` + +Building the image... + +``` +KUBE_DOCKER_IMAGE_TAG=vX.Y.Z KUBE_DOCKER_REGISTRY=k8s.gcr.io KUBE_FASTBUILD=true make quick-release +``` + + +--- + +# virtdp + +make the master scheduleable. 
+ +``` +kubectl taint node kube-master-1 node-role.kubernetes.io/master:NoSchedule- +kubectl label nodes kube-master-1 dedicated=master +``` + +``` +[centos@kube-master-1 virt-network-device-plugin]$ cat deployments/pod-virtdp.yaml +kind: Pod +apiVersion: v1 +metadata: + name: virt-device-plugin +spec: + nodeSelector: + dedicated: master + tolerations: + - key: node-role.kubernetes.io/master + operator: Equal + value: master + effect: NoSchedule + containers: + - name: virt-device-plugin + image: virt-device-plugin + imagePullPolicy: IfNotPresent + command: [ "/usr/bin/virtdp", "-logtostderr", "-v", "10" ] + # command: [ "/bin/bash", "-c", "--" ] + args: [ "while true; do sleep 300000; done;" ] + #securityContext: + #privileged: true + volumeMounts: + - mountPath: /var/lib/kubelet/device-plugins/ + name: devicesock + readOnly: false + - mountPath: /sys/class/net + name: net + readOnly: true + volumes: + - name: devicesock + hostPath: + # directory location on host + path: /var/lib/kubelet/device-plugins/ + - name: net + hostPath: + path: /sys/class/net + hostNetwork: true + hostPID: true +``` + +``` + dougbtv, I think I found the cause of why with kubeadm it was failing: could you please help to check if 1) in api-servver yaml file, --enable-admission-plugins=NodeRestriction is set; 2) in kubelet config.yaml (/var/lib/kubelet/config.yaml) ComputeDevice feature-gates is set + zshi, takinga look! + zshi, 1. where's the api server yaml? + dougbtv, /etc/kubernetes/manifests/kube-apiserver.yaml + and 2. I do not see the ComputeDevice in /var/lib/kubelet/config.yaml -- do you have an example for that? + ok, sweet in that apiserver I have `--feature-gates=ComputeDevice=true` + for 2 : http://pasteall.org/1159284 + ty! looking + for 1, check if this exist : --enable-admission-plugins=NodeRestriction + if yes, then remove NodeRestriction + ahhh yeah looking for wrong thing in there, looking, thanks + zshi, it's in there :) + --enable-admission-plugins=NodeRestriction + I'm using "LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,StorageObjectInUseProtection" for --enable-admission-plugins + although it doesn't really need those admissions, the point here is to remove NodeRestriction + cool, alright, restarting kubelet and api server, one sec :) + here is what I got when creating the pod, the computeDevice is in api-server : http://pasteall.org/1159329 +``` + +``` +[centos@kube-master-1 ~]$ cat /etc/cni/net.d/70-multus.conf +{ + "name": "multus-cni-network", + "type": "multus", + "delegates": [ + { + "type": "flannel", + "name": "flannel.1", + "delegate": { + "isDefaultGateway": true, + "hairpinMode": true + } + } + ], + "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig", + "LogFile": "/var/log/multus.log", + "LogLevel": "debug" +} +``` + + +``` +[centos@kube-master-1 ~]$ cat virt-dp.yml +kind: Pod +apiVersion: v1 +metadata: + name: virt-device-plugin +spec: + nodeSelector: + dedicated: master + tolerations: + - key: node-role.kubernetes.io/master + operator: Equal + value: master + effect: NoSchedule + containers: + - name: virt-device-plugin + image: virt-device-plugin + imagePullPolicy: IfNotPresent + command: [ "/usr/bin/virtdp", "-logtostderr", "-v", "10" ] + # command: [ "/bin/bash", "-c", "--" ] + args: [ "while true; do sleep 300000; done;" ] + #securityContext: + #privileged: true + volumeMounts: + - mountPath: /var/lib/kubelet/device-plugins/ + name: devicesock + readOnly: false + - mountPath: 
/sys/class/net
+        name: net
+        readOnly: true
+  volumes:
+    - name: devicesock
+      hostPath:
+        # directory location on host
+        path: /var/lib/kubelet/device-plugins/
+    - name: net
+      hostPath:
+        path: /sys/class/net
+  hostNetwork: true
+  hostPID: true
+```
+
+```
+[centos@kube-master-1 ~]$ cat modified.virt-crd.yaml
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+  name: virt-net1
+  annotations:
+    k8s.v1.cni.cncf.io/resourceName: kernel.org/virt
+spec:
+  config: '{
+  "type": "ehost-device",
+  "name": "virt-network",
+  "cniVersion": "0.3.0",
+  "deviceID": "0000:00:09.0",
+  "ipam": {
+    "type": "host-local",
+    "subnet": "10.56.217.0/24",
+    "routes": [{
+      "dst": "0.0.0.0/0"
+    }],
+    "gateway": "10.56.217.1"
+  }
+}'
+```
+
+
+```
+[centos@kube-master-1 ~]$ cat pod-tc1.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: testpod1
+  labels:
+    env: test
+  annotations:
+    k8s.v1.cni.cncf.io/networks: virt-net1
+spec:
+  containers:
+  - name: appcntr1
+    image: dougbtv/centos-network
+    imagePullPolicy: IfNotPresent
+    command: [ "/bin/bash", "-c", "--" ]
+    args: [ "while true; do sleep 300000; done;" ]
+    resources:
+      requests:
+        memory: "128Mi"
+        kernel.org/virt: '1'
+      limits:
+        memory: "128Mi"
+        kernel.org/virt: '1'
+```
diff --git a/contrib/multi-cluster/tmate.pl b/contrib/multi-cluster/tmate.pl
new file mode 100755
index 0000000..b3fe70b
--- /dev/null
+++ b/contrib/multi-cluster/tmate.pl
@@ -0,0 +1,90 @@
+#!/usr/bin/perl
+
+# Exit if there's no cluster environment variable set.
+$clusters = $ENV{'CLUSTERS'};
+if ($clusters < 1) {
+  die("Hey, set the \$CLUSTERS environment variable, and make it greater than 0");
+}
+
+# Initialize a markdown table.
+$output = qq!
+| Table Number | Primary SSH | - | Backup SSH |
+| ------------ | ----------- | --- | ---------- |
+!;
+
+# Cycle through all the clusters.
+for (my $i = 1; $i < $clusters+1; $i++) {
+
+  # Run the .tmate.sh on each host
+  my $command = "/home/centos/.tmate.sh";
+  $tmate_ssh_result = runSSHCommand($command,$i);
+  # print $tmate_ssh_result."\n";
+
+  # Split up the lines and add to markdown table.
+  my @lines = split(/\n/,$tmate_ssh_result);
+
+  $output .= "|".$i."|".$lines[0]."| <--> |".$lines[1]."|\n";
+
+}
+
+# print $output;
+
+# Write the results to a file.
+# my $outputfile = '/tmp/ssh.markdown';
+# open(my $fh, '>', $outputfile) or die "Could not open file '$outputfile' $!";
+# print $fh urlencode($output);
+# print "Wrote output to $outputfile\n";
+
+# encode it
+my $encoded = urlencode($output);
+my $curl_command = "curl -H \"Accept: application/json\" -X POST --data \"text=$encoded\" https://markdownshare.com/create/";
+# print "curl_command: $curl_command\n";
+
+# Failed github anonymous attempt.
+# $curl_command = 'curl --request POST --data {"description":"SSH access for ONS tutorial!","public":"true","files":{"README.md":{"content":"$output"}} https://api.github.com/gists';
+
+$curl_result = `$curl_command`;
+print "$curl_result\n";
+
+# Run a command via SSH.
+sub runSSHCommand {
+  my $command = $_[0];
+  my $clusternumber = $_[1];
+  my $full_command = getSSHCommand($clusternumber)." '".$command."'";
+  # print $full_command."\n";
+  return runCommand($full_command);
+}
+
+# Handler to create SSH commands.
+sub getSSHCommand { + my $i = $_[0]; + # Get the master & private key + my $cat_inventory = "cat inventory/multi-cluster/cluster-$i.inventory"; + my $ip = runCommand("$cat_inventory | grep kube-master | head -n1 | cut -d= -f2"); + my $key = runCommand("$cat_inventory | grep private_key | head -n1 | cut -d= -f2"); + + # Massage the ssh common args. + my $ssh_common_args = runCommand("$cat_inventory | grep common_args"); + $ssh_common_args =~ s/^.+'(.+)'$/$1/; + $ssh_common_args =~ s/ root/ -o LogLevel=ERROR root/; + + # Create the ssh command. + my $ssh_command = "ssh -i $key $ssh_common_args -o StrictHostKeyChecking=no -o LogLevel=ERROR centos\@$ip"; + return $ssh_command; +} + +# Run a command. +sub runCommand { + my $command = $_[0]; + my $result = `$command`; + $result =~ s/\s+$//; + return $result; +} + +sub urlencode { + my $s = shift; + $s =~ s/ /+/g; + $s =~ s/([^A-Za-z0-9\+-])/sprintf("%%%02X", ord($1))/seg; + return $s; +} + diff --git a/inventory/examples/image-bootstrap/extravars.yml b/inventory/examples/image-bootstrap/extravars.yml new file mode 100644 index 0000000..9e7c4f5 --- /dev/null +++ b/inventory/examples/image-bootstrap/extravars.yml @@ -0,0 +1,17 @@ +--- +# -------------------------------------------- +# bridge_networking: true +# bridge_name: br0 +# bridge_physical_nic: "enp1s0f1" +# bridge_network_name: "br0" +# bridge_network_cidr: 192.168.1.0/24 +enable_userspace_cni: true +enable_ehost_device_cni: true +enable_virt_network_device_plugin: true +ssh_proxy_enabled: true +binary_install: true +binary_kubectl_url: https://bintray.com/dougbtv/dougbtv-custom-kube/download_file?file_path=kubectl +binary_kubeadm_url: https://bintray.com/dougbtv/dougbtv-custom-kube/download_file?file_path=kubeadm +binary_kubelet_url: https://bintray.com/dougbtv/dougbtv-custom-kube/download_file?file_path=kubelet +binary_install_force_redownload: false +kubeadm_version: "v1.11.2" \ No newline at end of file diff --git a/inventory/examples/image-bootstrap/inventory b/inventory/examples/image-bootstrap/inventory new file mode 100644 index 0000000..74c9722 --- /dev/null +++ b/inventory/examples/image-bootstrap/inventory @@ -0,0 +1,16 @@ +kube-nonet-master ansible_host=192.168.1.150 +virthost ansible_host=192.168.1.111 ansible_ssh_user=root + +[master] +kube-nonet-master + +[virthost] +virthost + +[virthost:vars] +ansible_user=root + +[master:vars] +ansible_user=centos +ansible_ssh_private_key_file=/home/doug/.ssh/virthost/id_vm_rsa + diff --git a/inventory/examples/image-bootstrap/postbootstrap-extravars.yml b/inventory/examples/image-bootstrap/postbootstrap-extravars.yml new file mode 100644 index 0000000..86bd66b --- /dev/null +++ b/inventory/examples/image-bootstrap/postbootstrap-extravars.yml @@ -0,0 +1,21 @@ +--- +# -------------------------------------------- +hugepages_enabled: true +image_destination_name: bootstrapped.qcow2 +pod_network_type: "none" +bridge_networking: true +bridge_name: br0 +bridge_physical_nic: "enp1s0f1" +bridge_network_name: "br0" +bridge_network_cidr: 192.168.1.0/24 +virtual_machines: + - name: kube-master + node_type: master + system_ram_mb: 4096 + - name: kube-node-1 + node_type: nodes + system_ram_mb: 4096 + - name: kube-node-2 + node_type: nodes + system_ram_mb: 4096 +enable_userspace_cni: true \ No newline at end of file diff --git a/inventory/examples/ovs-dpdk/extra-vars.yml b/inventory/examples/ovs-dpdk/extra-vars.yml new file mode 100644 index 0000000..2f1af2e --- /dev/null +++ b/inventory/examples/ovs-dpdk/extra-vars.yml @@ -0,0 +1,40 @@ +--- +# pod 
network type +pod_network_type: "multus" + +# verified kube version +kube_version: "1.11.3" +kubeadm_version: "1.11.3" + +# more memory for 1GB hugepages +system_default_ram_mb: 8192 +virtual_machines: + - name: kube-master + node_type: master + - name: kube-node-1 + node_type: nodes + +# ---------------------------- +# device plugins +# ---------------------------- +enable_device_plugins: true +attach_additional_virtio_device: true + +# ---------------------------- +# userspace CNI +# ---------------------------- +enable_userspace_cni: false +enable_userspace_ovs_cni: true +enable_ehost_device_cni: true +enable_virt_network_device_plugin: true + +# vm interface won't change +multus_macvlan_master: 'eth0' + +# optional packages +optional_packages: "@Development tools" + +# disable customization +enable_compute_device: false +customize_kube_config: false +skip_init: false diff --git a/playbooks/create-bootstrapped-image.yml b/playbooks/create-bootstrapped-image.yml new file mode 100644 index 0000000..a754041 --- /dev/null +++ b/playbooks/create-bootstrapped-image.yml @@ -0,0 +1,162 @@ +--- + +- import_playbook: ka-init/init.yml + +- hosts: virthost + become: true + become_user: root + tasks: + - set_fact: + path_bootstrap_image_source: "/home/images/bootstrapkubemaster/bootstrapkubemaster.qcow2" + path_bootstrap_image_dest: "/home/images/bootstrapped.qcow2" + spare_disk_attach: false + virtual_machines: + - name: bootstrapkubemaster + node_type: master + bootstrap_common_args: "" + ssh_proxy_enabled: false + ssh_proxy_user: root + +- import_playbook: virthost-setup.yml + +- hosts: virthost + tasks: + + - name: Get created VM's IP to use in dynamic inventory + set_fact: + bootstrap_use_ip: "{{ vm_ips_dict.bootstrapkubemaster }}" + + - name: "Add proxy command if set" + set_fact: + bootstrap_common_args: > + -o ProxyCommand="ssh{% if ssh_proxy_port is defined %} -p {{ ssh_proxy_port }}{% endif %} -W %h:%p {{ ssh_proxy_user }}@{{ ssh_proxy_host }}" + when: "ssh_proxy_enabled" + + - name: Add a host to the inventory so we can install kube deps on it. + add_host: + hostname: "bootstrapkubemaster" + ansible_ssh_host: "{{ bootstrap_use_ip }}" + groups: master + ansible_ssh_private_key_file: "{{ vm_ssh_key_path }}" + ansible_ssh_common_args: "{{ bootstrap_common_args }}" + ansible_user: "centos" + +- hosts: all + tasks: + - name: Express that we'd like to not init the cluster, initialize other variables. + set_fact: + skip_init: true + pre_pull_images: + - "centos:centos7" + # - "centos:tools" + - "nfvpe/multus:deviceid" + - "dougbtv/centos-network" + # Update this image for userspace. 
+ - "bmcfall/vpp-centos-userspace-cni:0.4.0" + - "quay.io/coreos/flannel:v0.10.0-amd64" + - "nfvpe/virtdp:latest" + - "nfvpe/kube-api-server-amd64:deviceid" + +- import_playbook: kube-install.yml + +- hosts: master + become: true + become_user: root + tasks: + + - name: Install tmux + yum: + name: tmux + state: present + + - name: Download tmate binary + unarchive: + src: https://github.com/tmate-io/tmate/releases/download/2.2.1/tmate-2.2.1-static-linux-amd64.tar.gz + dest: /tmp/ + remote_src: yes + + - name: Move tmate binary + shell: > + mv /tmp/tmate-2.2.1-static-linux-amd64/tmate /usr/local/bin/tmate + + - name: Gen an ssh key + shell: > + ssh-keygen -b 2048 -t rsa -f /home/centos/.ssh/id_rsa -q -N "" + creates: /home/centos/.ssh/id_rsa + + - name: Clone virt-network-device-plugin locally + git: + repo: https://github.com/zshi-redhat/virt-network-device-plugin.git + dest: "/home/centos/virt-network-device-plugin" + force: yes + + - name: Clone Multus locally + git: + repo: https://github.com/intel/multus-cni.git + dest: /home/centos/multus-cni + force: yes + + - name: Modify Multus to use :deviceid tagged image (for SR-IOV tutorial) + shell: > + sed -i -e 's|nfvpe/multus:latest|nfvpe/multus:deviceid|' /home/centos/multus-cni/images/multus-daemonset.yml + + - name: Pull kubeadm images + shell: > + kubeadm config images pull --kubernetes-version={{ kubeadm_version }} + + - name: Pre-pull necessary docker images + shell: > + docker pull {{ item }} + with_items: "{{ pre_pull_images }}" + + - name: Jimmy in the custom api server image + shell: > + docker tag k8s.gcr.io/kube-apiserver-amd64:{{ kubeadm_version }} k8s.gcr.io/kube-apiserver-amd64:orig.{{ kubeadm_version }}; + docker rmi k8s.gcr.io/kube-apiserver-amd64:{{ kubeadm_version }}; + docker tag docker.io/nfvpe/kube-api-server-amd64:deviceid k8s.gcr.io/kube-apiserver-amd64:{{ kubeadm_version }} + + - name: Re-install cloud-init goodies + yum: + name: "{{ item }}" + state: present + with_items: + - cloud-init + - cloud-utils + - cloud-utils-growpart + + - name: Delete the cloud-init dir and re-create dir + file: + path: /var/lib/cloud + state: "{{ item }}" + with_items: + - absent + - directory + + - name: Remove hostname + shell: > + hostnamectl set-hostname "" + + - name: "Power off machine" + shell: "sleep 2 && poweroff" + async: 1 + poll: 0 + +- hosts: virthost + tasks: + + - name: Ensure virt-sysprep is installed + yum: + name: "libguestfs-tools-c" + state: present + + - name: Copy bootstrapped image + shell: > + cp -f {{ path_bootstrap_image_source }} {{ path_bootstrap_image_dest }} + + # http://manpages.ubuntu.com/manpages/xenial/man1/virt-sysprep.1.html + - name: Run virt-sysprep to strip persistent stuff from image + shell: > + virt-sysprep -a {{ path_bootstrap_image_dest }} + + - debug: + msg: "Bootstrapped image qcow2 copied to: {{ path_bootstrap_image_dest }}" diff --git a/playbooks/ehost-device-cni.yml b/playbooks/ehost-device-cni.yml new file mode 100644 index 0000000..df1a779 --- /dev/null +++ b/playbooks/ehost-device-cni.yml @@ -0,0 +1,10 @@ +--- +- import_playbook: ka-init/init.yml + +- hosts: master + become: true + become_user: root + tasks: [] + roles: + # - { role: install-go } + - { role: ehost-device-cni } diff --git a/playbooks/ka-init/group_vars/all.yml b/playbooks/ka-init/group_vars/all.yml index 9d6190e..adc4de6 100644 --- a/playbooks/ka-init/group_vars/all.yml +++ b/playbooks/ka-init/group_vars/all.yml @@ -22,10 +22,24 @@ pod_network_type: "flannel" pod_network_cidr: "10.244.0.0" # General config +# require 
empty dict by default +proxy_env: {} -# At 1.7.2 you need this cause of a bug in kubeadm join. -# Turn it off later, or, try it if a join fails. -skip_preflight_checks: true +## if proxies are needed set them in proxy_env dictionary +# HTTP proxy full URL +# !!!! NOTE ansible does not support https:// for https_proxy, only http:// +# Configure socks proxy if required for git:// protocol +# If in proxy env, uncomment no_proxy as its used to make exceptions and specify, +# which domains or IP addresses should be reached directly + +#proxy_env: +# http_proxy: http://proxy.example.com:8080 +# https_proxy: http://proxy.example.com:8080 +# socks_proxy: http://proxy.example.com:1080 +# no_proxy: "localhost,127.0.0.1,10.244.0.0/16,10.96.0.0/12,192.168.122.0/24,.intel.com" + +# Sometimes, you gotta skip 'em, but, by default we try not to. +skip_preflight_checks: false # Stable. (was busted at 1.6 release, may work now, untested for a couple months) kube_baseurl: http://yum.kubernetes.io/repos/kubernetes-el7-x86_64 @@ -41,10 +55,11 @@ kube_baseurl: http://yum.kubernetes.io/repos/kubernetes-el7-x86_64 # kube_version: 1.7.5-0 # The default is... "latest" kube_version: "latest" +kubeadm_version: "" # Binary install # Essentially replaces the RPM installed binaries with a specific set of binaries from URLs. -# binary_install: true +# binary_install: false # binary_install_force_redownload: false # binary_kubectl_url: https://github.com/leblancd/kubernetes/releases/download/v1.9.0-alpha.1.ipv6.1b/kubectl # binary_kubeadm_url: https://github.com/leblancd/kubernetes/releases/download/v1.9.0-alpha.1.ipv6.1b/kubeadm @@ -80,6 +95,8 @@ kubectl_proxy_port: 8088 # Allow the kubernetes control plane to listen on all interfaces #control_plane_listen_all: true +customize_kube_config: false + # --------------------------- - # multus-cni vars - - # ------------------------- - @@ -136,6 +153,16 @@ ipv6_enabled: false # device plugins # ---------------------------- enable_device_plugins: false +enable_compute_device: false +attach_additional_virtio_device: false + +# ---------------------------- +# userspace CNI +# ---------------------------- +enable_userspace_cni: false +enable_userspace_ovs_cni: false +enable_ehost_device_cni: false +enable_virt_network_device_plugin: false # ---------------------------- # builder vars diff --git a/playbooks/kube-init.yml b/playbooks/kube-init.yml new file mode 100644 index 0000000..e7ef51a --- /dev/null +++ b/playbooks/kube-init.yml @@ -0,0 +1,77 @@ +--- +- import_playbook: ka-init/init.yml +- import_playbook: ka-lab-ipv6/ipv6-lab.yml + when: ipv6_enabled + +- hosts: master,nodes + become: true + become_user: root + tasks: + - name: Set bridge-nf-call-iptables to 1 (yes, this is redundant, earlier steps may not have been sticky) + shell: > + echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables + ignore_errors: true + +- hosts: master + become: true + become_user: root + # pre_tasks: + # - debug: + # msg: "Skip init? 
{{ skip_init }}" + roles: + - { role: kube-init } + - { role: kube-template-cni } + +- hosts: master + tasks: [] + roles: + - { role: kube-cni } + +- hosts: master + tasks: [] + roles: + - { role: multus-crd, when: "pod_network_type == 'multus' and multus_use_crd and not multus_npwg_demo and not skip_init"} + +- hosts: nodes + become: true + become_user: root + pre_tasks: + - name: Get kubeadm_join_command from master + set_fact: + kubeadm_join_command: "{{ hostvars[groups['master'][0]]['kubeadm_join_command'] }}" + tasks: [] + roles: + - { role: kube-join-cluster } + +- hosts: master,nodes + become: true + become_user: root + tasks: [] + roles: + # - { role: kubectl-proxy-systemd } + - { role: modified-kube-config, when: customize_kube_config } + +- hosts: master + become: true + become_user: root + tasks: [] + roles: + # - { role: kubectl-proxy-systemd } + - { role: tmate } + +- hosts: master + tasks: + - name: Get API server pod name + shell: > + kubectl get pods --all-namespaces | grep -i apiserver | awk '{print $2}' + register: apiserver_name + until: apiserver_name.stdout | search ("apiserver") + retries: 60 + delay: 3 + ignore_errors: yes + + when: customize_kube_config + - name: Remove API server pod to restart it + shell: > + kubectl delete pod {{ apiserver_name.stdout }} --namespace=kube-system + when: customize_kube_config diff --git a/playbooks/kube-install.yml b/playbooks/kube-install.yml index 964a49b..a5bbcf6 100644 --- a/playbooks/kube-install.yml +++ b/playbooks/kube-install.yml @@ -3,23 +3,39 @@ - import_playbook: ka-lab-ipv6/ipv6-lab.yml when: ipv6_enabled +- hosts: all + tasks: + - set_fact: + skip_init: false + when: skip_init is undefined + - hosts: master,nodes become: true become_user: root tasks: [] + pre_tasks: + - set_fact: + calc_proxy_env: "{{ proxy_env|calculate_no_proxy }}" + - group_by: key={{ ansible_os_family }} roles: + - { role: set-proxy, when: calc_proxy_env } - { role: bridge-setup, when: pod_network_type == 'bridge' or pod_network_type == 'kokonet-bridge' } - { role: kokonet-setup, when: pod_network_type == 'kokonet-bridge' } - { role: npwg-poc1-setup, when: pod_network_type == 'multus' and multus_npwg_demo } - { role: optional-packages } - - { role: install-go, when: container_runtime == 'crio' } # You can add "crio_force: true" if you need to run the builds again. - { role: cri-o-install, when: container_runtime == 'crio', crio_force: false } - { role: buildah-install, when: container_runtime == 'crio' } - { role: install-docker, when: container_runtime == 'docker' } - { role: kube-install } - { role: multus-cni, when: pod_network_type == "multus" } + - { role: userspace-cni, when: enable_userspace_cni } + - { role: ehost-device-cni, when: enable_ehost_device_cni } + - { role: virt-network-device-plugin, when: enable_virt_network_device_plugin } + +- import_playbook: userspace-ovs-cni.yml + when: enable_userspace_ovs_cni - import_playbook: ka-lab-kokonet/kokonet-lab.yml when: pod_network_type == 'kokonet-bridge' @@ -31,29 +47,29 @@ - name: Set bridge-nf-call-iptables to 1 shell: > echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables + ignore_errors: true - hosts: master become: true become_user: root - tasks: [] + # pre_tasks: + # - debug: + # msg: "Skip init? {{ skip_init }}" roles: - - { role: kube-init } - - { role: kube-template-cni } - -# ---- placeholder: kube-cni -# without become. 
+ - { role: kube-init, when: "not skip_init" } + - { role: kube-template-cni, when: "not skip_init" } - hosts: master tasks: [] roles: - - { role: kube-cni } + - { role: kube-cni, when: "not skip_init" } - { role: kube-niceties } - hosts: master tasks: [] roles: - - { role: multus-crd, when: "pod_network_type == 'multus' and multus_use_crd and not multus_npwg_demo"} + - { role: multus-crd, when: "pod_network_type == 'multus' and multus_use_crd and not multus_npwg_demo and not skip_init"} - hosts: nodes become: true @@ -64,11 +80,11 @@ kubeadm_join_command: "{{ hostvars[groups['master'][0]]['kubeadm_join_command'] }}" tasks: [] roles: - - { role: kube-join-cluster } + - { role: kube-join-cluster, when: "not skip_init" } - hosts: master become: true become_user: root tasks: [] roles: - - { role: kubectl-proxy-systemd } + - { role: kubectl-proxy-systemd, when: "not skip_init" } diff --git a/playbooks/userspace-cni.yml b/playbooks/userspace-cni.yml new file mode 100644 index 0000000..b3c9879 --- /dev/null +++ b/playbooks/userspace-cni.yml @@ -0,0 +1,10 @@ +--- +- import_playbook: ka-init/init.yml + +- hosts: master + become: true + become_user: root + tasks: [] + roles: + - { role: install-go } + - { role: userspace-cni } diff --git a/playbooks/userspace-ovs-cni.yml b/playbooks/userspace-ovs-cni.yml new file mode 100644 index 0000000..b8cd961 --- /dev/null +++ b/playbooks/userspace-ovs-cni.yml @@ -0,0 +1,15 @@ +--- +- import_playbook: ka-init/init.yml + +- hosts: master,nodes + roles: + - role: optional-packages + vars: + optional_packages: + - "@Development tools" + - libcap-ng + - openssl + - numactl-devel + - git + - { role: ansible-ovs-dpdk, ovs_version: 'v2.10.0', dpdk_version: 'v17.11' } + - { role: userspace-ovs-cni } diff --git a/playbooks/virt-network-device-plugin.yaml b/playbooks/virt-network-device-plugin.yaml new file mode 100644 index 0000000..c8b535b --- /dev/null +++ b/playbooks/virt-network-device-plugin.yaml @@ -0,0 +1,10 @@ +--- +- import_playbook: ka-init/init.yml + +- hosts: master + become: true + become_user: root + tasks: [] + roles: + # - { role: install-go } + - { role: virt-network-device-plugin } diff --git a/playbooks/virthost-setup.yml b/playbooks/virthost-setup.yml index 9c9e297..75656aa 100644 --- a/playbooks/virthost-setup.yml +++ b/playbooks/virthost-setup.yml @@ -3,6 +3,18 @@ - hosts: virthost tasks: [] - + post_tasks: + - name: Get master + set_fact: + use_master: "{{ item.name }}" + when: "item.node_type == 'master'" + with_items: "{{ virtual_machines }}" + - debug: + msg: > + virsh attach-interface --domain {{ use_master }} --type bridge --model virtio --source virbr0 --config --live + - name: Add additional interface to master + shell: > + virsh attach-interface --domain {{ use_master }} --type bridge --model virtio --source virbr0 --config --live + when: attach_additional_virtio_device roles: - { role: redhat-nfvpe.vm-spinup } diff --git a/requirements.yml b/requirements.yml index f2193cb..842933f 100644 --- a/requirements.yml +++ b/requirements.yml @@ -1,5 +1,5 @@ --- -- src: https://github.com/redhat-nfvpe/ansible-role-install-go +- src: https://github.com/gantsign/ansible-role-golang name: install-go version: master - src: https://github.com/redhat-nfvpe/ansible-role-install-docker diff --git a/roles/ansible-ovs-dpdk/README.md b/roles/ansible-ovs-dpdk/README.md new file mode 100644 index 0000000..a68dc56 --- /dev/null +++ b/roles/ansible-ovs-dpdk/README.md @@ -0,0 +1,83 @@ +# Ansible Playbook to Build Open vSwitch with DPDK support + +This playbook 
installs Open vSwitch with DPDK support.

## Quick Start

Ensure that you have [installed Ansible](http://docs.ansible.com/ansible/intro_installation.html) on the host where you want to run the playbook from.

This playbook has been tested against Fedora 22.

To run the playbook against a host 192.168.1.100 (note the comma following the host name/IP address must be included):

```bash
$ git clone https://github.com/mixja/ansible-ovs-dpdk.git
...
...
$ cd ansible-ovs-dpdk
ansible-dpdk-seastar$ ansible-playbook -i "192.168.1.100," site.yml
SSH password: *******

PLAY [Provision Custom Facts] *************************************************
...
...
```

## Changing Folder and Repo Settings

The `group_vars/all` file contains the following variables:

- `dpdk_dir` - root folder of the DPDK source
- `dpdk_build` - build folder for the DPDK source
- `dpdk_repo` - Git repo of the DPDK source
- `ovs_dir` - root folder of the OVS source
- `ovs_repo` - Git repo of the OVS source

## Changing Build Settings

The following variables can be used to force a rebuild or build a different version:

- `ovs_rebuild` - if set to any value, forces OVS to be built.
- `ovs_version` - specifies the branch, tag or commit hash to build. If a change is detected from the current repo, OVS will be rebuilt.
- `dpdk_rebuild` - if set to any value, forces DPDK to be built.
- `dpdk_version` - specifies the branch, tag or commit hash to build. If a change is detected from the current repo, DPDK will be rebuilt.
- `dpdk_device_name` - defines the device name to use for DPDK UIO/VFIO scripts. The default value is `eno1` if not specified.

The following example forces DPDK to be built:

```bash
$ ansible-playbook -i "192.168.1.100," site.yml --extra-vars "dpdk_rebuild=true"
```

The following example checks out OVS commit abc1234 and forces a build of OVS:

```bash
$ ansible-playbook -i "192.168.1.100," site.yml --extra-vars "ovs_rebuild=true ovs_version=abc1234"
```

## Testing OVS DPDK

After DPDK and OVS are built you can use the following helper scripts:

### Load DPDK kernel module and bind network interface

Choose one of the following options:

- `/root/dpdk_uio.sh` - downs the network interface, inserts the UIO kernel module and binds DPDK to the network interface
- `/root/dpdk_vfio.sh` - downs the network interface, inserts the VFIO_PCI kernel module and binds DPDK to the network interface

### Init and start OVS

- `/root/start_ovsdb_server.sh` - starts OVSDB server
- `/root/start_ovs_vswitchd.sh` - starts OVS vswitchd with DPDK support enabled

### Create OVS bridges and ports

Create an OVS bridge with the datapath_type "netdev":

`ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`

Add DPDK devices:

`ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk`

See the [OVS DPDK README](https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md) for further information.
\ No newline at end of file
diff --git a/roles/ansible-ovs-dpdk/defaults/main.yml b/roles/ansible-ovs-dpdk/defaults/main.yml
new file mode 100644
index 0000000..08d155a
--- /dev/null
+++ b/roles/ansible-ovs-dpdk/defaults/main.yml
@@ -0,0 +1,9 @@
+---
+dpdk_dir: /usr/src/dpdk
+dpdk_build: '{{ dpdk_dir }}/x86_64-native-linuxapp-gcc'
+dpdk_repo: https://github.com/DPDK/dpdk
+
+ovs_dir: /usr/src/ovs
+ovs_repo: https://github.com/openvswitch/ovs.git
+
+nr_hugepages: 4
diff --git a/roles/ansible-ovs-dpdk/tasks/dpdk.yml b/roles/ansible-ovs-dpdk/tasks/dpdk.yml
new file mode 100644
index 0000000..6894ea1
--- /dev/null
+++ b/roles/ansible-ovs-dpdk/tasks/dpdk.yml
@@ -0,0 +1,15 @@
+---
+- name: Checkout patched DPDK
+  git: >
+    repo={{ dpdk_repo }}
+    dest={{ dpdk_dir }}
+    version={{ dpdk_version | default("master") }}
+    update=no
+    force=yes
+  register: dpdk_changed
+- name: Check if DPDK build exists
+  stat: path={{ dpdk_build }}
+  register: dpdk_build_status
+- name: Build DPDK
+  command: make install T=x86_64-native-linuxapp-gcc chdir={{ dpdk_dir }}
+  when: (dpdk_build_status.stat.isdir is not defined) or (dpdk_rebuild is defined) or dpdk_changed.changed
diff --git a/roles/ansible-ovs-dpdk/tasks/hugepages.yml b/roles/ansible-ovs-dpdk/tasks/hugepages.yml
new file mode 100644
index 0000000..005e839
--- /dev/null
+++ b/roles/ansible-ovs-dpdk/tasks/hugepages.yml
@@ -0,0 +1,45 @@
+---
+- name: Fetch default kernel
+  command: grubby --default-kernel
+  register: default_kernel
+- name: Check existing hugepages
+  command: cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
+  register: huge_1G
+- name: Update GRUB configuration
+  command: "grubby --update-kernel {{ default_kernel.stdout }} --args 'iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages={{ nr_hugepages }}'"
+- name: Add hugepages mount to fstab - 1G
+  lineinfile: >
+    dest=/etc/fstab
+    line='hugetlbfs /dev/hugepages hugetlbfs pagesize=1G 0 0'
+    insertafter=EOF
+- name: Reboot server
+  block:
+    - name: Schedule reboot
+      command: /usr/bin/systemd-run --on-active=5 /usr/bin/systemctl reboot
+      async: 0
+      poll: 0
+      ignore_errors: true
+    - name: "Wait until {{ ansible_host }} ssh is DOWN"
+      wait_for:
+        host: "{{ ansible_host }}"
+        state: stopped
+        port: 22
+        timeout: 60
+        delay: 5
+      delegate_to: localhost
+    - name: "Wait until {{ ansible_host }} ssh is UP"
+      wait_for:
+        host: "{{ ansible_host }}"
+        state: started
+        port: 22
+        timeout: 60
+        delay: 5
+      delegate_to: localhost
+    - name: "Wait until {{ ansible_host }} system is READY"
+      command: systemctl list-jobs
+      ignore_errors: true
+      register: result
+      until: result.stdout.find("No jobs running.") != -1
+      retries: 6
+      delay: 5
+  when: huge_1G.stdout | int != nr_hugepages
diff --git a/roles/ansible-ovs-dpdk/tasks/main.yml b/roles/ansible-ovs-dpdk/tasks/main.yml
new file mode 100644
index 0000000..8c0484d
--- /dev/null
+++ b/roles/ansible-ovs-dpdk/tasks/main.yml
@@ -0,0 +1,4 @@
+---
+- include: hugepages.yml
+- include: dpdk.yml
+- include: ovs.yml
diff --git a/roles/ansible-ovs-dpdk/tasks/ovs.yml b/roles/ansible-ovs-dpdk/tasks/ovs.yml
new file mode 100644
index 0000000..9eedab1
--- /dev/null
+++ b/roles/ansible-ovs-dpdk/tasks/ovs.yml
@@ -0,0 +1,87 @@
+---
+- name: Checkout OVS
+  git: >
+    repo={{ ovs_repo }}
+    dest={{ ovs_dir }}
+    version={{ ovs_version | default("master") }}
+  register: ovs_changed
+- name: Check if OVS configure script exists
+  stat: path={{ ovs_dir }}/configure
+  register: ovs_config_status
+- name: Bootstrap OVS if required
+  command:
./boot.sh chdir={{ ovs_dir }} + when: ovs_config_status.stat.exists == false or (ovs_rebuild is defined) or ovs_changed.changed +- name: Check if OVS Makefile exists + stat: path={{ ovs_dir }}/Makefile + register: ovs_makefile_status +- name: Configure OVS + command: ./configure --with-dpdk={{ dpdk_build }} CFLAGS="-g -O2 -Wno-cast-align" chdir={{ ovs_dir }} + when: ovs_makefile_status.stat.exists == false or (ovs_rebuild is defined) or ovs_changed.changed +- name: Check if OVS distribution files exists + stat: path={{ ovs_dir }}/distfiles + register: ovs_distfiles_status +- name: Build OVS + command: make CFLAGS='-O3 -march=native' chdir={{ ovs_dir }} + when: ovs_distfiles_status.stat.exists == false or (ovs_rebuild is defined) or ovs_changed.changed +- name: Check if OVS tools are installed + stat: path=/usr/local/bin/ovsdb-tool + register: ovs_tools_status +- name: Install OVS tools + command: make install chdir={{ ovs_dir }} + when: ovs_tools_status.stat.exists == false or (ovs_rebuild is defined) or ovs_changed.changed +- name: Create folders + file: path={{ item }} state=directory + with_items: + - /usr/local/etc/openvswitch + - /usr/local/var/run/openvswitch +- name: Clear database configuration if required + file: path=/usr/local/etc/openvswitch/conf.db state=absent + when: ovs_rebuild is defined or ovs_changed.changed +- name: Check if database configuration exists + stat: path=/usr/local/etc/openvswitch/conf.db + register: ovs_dbconfig_status +- name: Create database configuration + command: /usr/local/bin/ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema + when: ovs_dbconfig_status.stat.exists == false +- name: Start OVS database server + command: /usr/local/share/openvswitch/scripts/ovs-ctl --no-ovs-vswitchd start +- name: Configure OVS dpdk-socket-mem + openvswitch_db: + table: open_vswitch + record: . + col: other_config + key: dpdk-socket-mem + value: "2048,0" +# command: '/usr/local/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,0"' +- name: Configure OVS dpdk-init + openvswitch_db: + table: open_vswitch + record: . + col: other_config + key: dpdk-init + value: true +# command: '/usr/local/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true' +- name: Configure OVS pmd-cpu-mask + openvswitch_db: + table: open_vswitch + record: . + col: other_config + key: pmd-cpu-mask + value: 0x3 +# command: '/usr/local/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x3' +- name: Configure OVS dpdk-lcore-mask + openvswitch_db: + table: open_vswitch + record: . + col: other_config + key: dpdk-lcore-mask + value: 0xc +# command: '/usr/local/bin/ovs-vsctl --no-wait set Open_vSwitch . 
other_config:dpdk-lcore-mask=0xc' +- name: Start OVS daemon + command: /usr/local/share/openvswitch/scripts/ovs-ctl --no-ovsdb-server --db-sock="/usr/local/var/run/openvswitch/db.sock" restart +- name: Add OVS bridge + openvswitch_bridge: + bridge: br0 + state: present + set: 'bridge br0 datapath_type=netdev' +# command: /usr/local/bin/ovs-vsctl --may-exist add-br br0 -- set bridge br0 datapath_type=netdev diff --git a/roles/cri-o-install/meta/main.yml b/roles/cri-o-install/meta/main.yml index 0e56d5c..eda416c 100644 --- a/roles/cri-o-install/meta/main.yml +++ b/roles/cri-o-install/meta/main.yml @@ -1,3 +1,3 @@ --- dependencies: - - install-go + - { role: install-go, golang_gopath: '/root/go' } diff --git a/roles/ehost-device-cni/tasks/main.yml b/roles/ehost-device-cni/tasks/main.yml new file mode 100644 index 0000000..dee02d2 --- /dev/null +++ b/roles/ehost-device-cni/tasks/main.yml @@ -0,0 +1,25 @@ +--- + +- name: Ensure git is installed + yum: + name: git + state: present + +- name: Clone ehost-device-cni Plugin repo + git: + repo: https://github.com/zshi-redhat/ehost-device-cni.git + dest: /usr/src/ehost-device-cni + force: true + +- name: Build it + shell: > + source /etc/profile.d/golang.sh;./build.sh + args: + chdir: /usr/src/ehost-device-cni + +- name: Copy Userspace CNI plugin to CNI bin dir + shell: > + cp bin/ehost-device /opt/cni/bin/ehost-device + args: + chdir: /usr/src/ehost-device-cni + creates: "/opt/cni/bin/ehost-device" diff --git a/roles/kube-init/defaults/main.yml b/roles/kube-init/defaults/main.yml index 192f1a7..f24e288 100644 --- a/roles/kube-init/defaults/main.yml +++ b/roles/kube-init/defaults/main.yml @@ -4,3 +4,4 @@ kubectl_home: /home/centos artifacts_install: false ipv6_enabled: false control_plane_listen_all: false +ignore_preflight_version: false \ No newline at end of file diff --git a/roles/kube-init/tasks/main.yml b/roles/kube-init/tasks/main.yml index 380bdee..b5d6d19 100644 --- a/roles/kube-init/tasks/main.yml +++ b/roles/kube-init/tasks/main.yml @@ -27,12 +27,21 @@ k8s_version: "--kubernetes-version {{ kube_version }}" when: artifacts_install +- name: Default preflight ignore argument + set_fact: + arg_ignore: "" + +- name: Set preflight ignore argument when enabled + set_fact: + arg_ignore: --ignore-preflight-errors=KubernetesVersion,KubeletVersion + when: ignore_preflight_version + # Was trying to use flannel and running with: # kubeadm init > /etc/kubeadm.init.txt # abandonded for now... 
 - name: Run kubeadm init
   shell: >
-    kubeadm init {{ k8s_version }} {{ arg_crio }} --config=/root/kubeadm.cfg > /var/log/kubeadm.init.log
+    kubeadm init {{ k8s_version }} {{ arg_crio }} {{ arg_ignore }} --config=/root/kubeadm.cfg > /var/log/kubeadm.init.log
   args:
     creates: /etc/.kubeadm-complete
@@ -68,18 +77,18 @@
 
 - name: Copy admin.conf to kubectl user's home
   shell: >
-    cp -f /etc/kubernetes/admin.conf {{ kubectl_home }}/.kube/admin.conf
+    cp -f /etc/kubernetes/admin.conf {{ kubectl_home }}/.kube/config
   args:
-    creates: "{{ kubectl_home }}/admin.conf"
+    creates: "{{ kubectl_home }}/.kube/config"
 
 - name: Set admin.conf ownership
   file:
-    path: "{{ kubectl_home }}/.kube/admin.conf"
+    path: "{{ kubectl_home }}/.kube/config"
     owner: "{{ kubectl_user }}"
     group: "{{ kubectl_group }}"
 
-- name: Add KUBECONFIG env for admin.conf to .bashrc
+- name: Add KUBECONFIG env for config to .bashrc
   lineinfile:
     dest: "{{ kubectl_home }}/.bashrc"
     regexp: "KUBECONFIG"
-    line: "export KUBECONFIG={{ kubectl_home }}/.kube/admin.conf"
+    line: "export KUBECONFIG={{ kubectl_home }}/.kube/config"
diff --git a/roles/kube-init/templates/kubeadm.cfg.j2 b/roles/kube-init/templates/kubeadm.cfg.j2
index 6ded8ec..ea2ac4e 100644
--- a/roles/kube-init/templates/kubeadm.cfg.j2
+++ b/roles/kube-init/templates/kubeadm.cfg.j2
@@ -1,6 +1,9 @@
 # Full parameters @ https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
-apiVersion: kubeadm.k8s.io/v1alpha1
+apiVersion: kubeadm.k8s.io/v1alpha2
 kind: MasterConfiguration
+{% if kubeadm_version != "" %}
+kubernetesVersion: {{ kubeadm_version }}
+{% endif %}
 {% if control_plane_listen_all %}
 controllerManagerExtraArgs:
   address: 0.0.0.0
@@ -11,6 +14,10 @@ schedulerExtraArgs:
 apiServerExtraArgs:
   feature-gates: DevicePlugins=true
 {% endif %}
+{% if enable_compute_device %}
+apiServerExtraArgs:
+  feature-gates: ComputeDevice=true
+{% endif %}
 {% if ipv6_enabled %}
 api:
   advertiseAddress: fd00::100
diff --git a/roles/kube-install/tasks/binary_install.yml b/roles/kube-install/tasks/binary_install.yml
index 62b9e7c..d6a6bd7 100644
--- a/roles/kube-install/tasks/binary_install.yml
+++ b/roles/kube-install/tasks/binary_install.yml
@@ -20,12 +20,33 @@
     item.url_is_set and
     (download_complete_semaphor.stat.exists == False or binary_install_force_redownload)
 
+- name: Default the HTTP proxy environment
+  set_fact:
+    use_proxy: ""
+
+- name: Set the HTTP proxy environment
+  set_fact:
+    use_proxy: "export http_proxy={{ http_proxy }} && "
+  when: http_proxy != ""
+
+- name: Default the HTTPS proxy environment
+  set_fact:
+    use_https_proxy: ""
+
+- name: Set the HTTPS proxy environment
+  set_fact:
+    use_https_proxy: "export https_proxy={{ https_proxy }} && "
+  when: https_proxy != ""
+
+
 - name: Download kubelet/kubectl/kubeadm
-  get_url:
-    url: "{{ item.use_url }}"
-    dest: "{{ item.to_path }}"
-    mode: 0755
-    force: "{{ binary_install_force_redownload }}"
+  shell: >
+    {{use_proxy}}{{use_https_proxy}}curl -L {{ item.use_url }} -o {{ item.to_path }}
+  # get_url:
+  #   url: "{{ item.use_url }}"
+  #   dest: "{{ item.to_path }}"
+  #   mode: 0755
+  #   force: "{{ binary_install_force_redownload }}"
   when: binary_kubelet_url is defined
   with_items:
     - use_url: "{{ binary_kubelet_url }}"
@@ -35,6 +56,17 @@
     - use_url: "{{ binary_kubectl_url }}"
       to_path: "/usr/bin/kubectl"
 
+- name: Set binary path perms
+  file:
+    path: "{{ item.path }}"
+    mode: 0755
+  with_items:
+    - path: /usr/bin/kubelet
+    - path: /usr/bin/kubectl
+    - path: /usr/bin/kubeadm
+  when: >
+    binary_kubelet_url is defined
+
 - name: Mark download complete
   file:
     path: "{{ kubectl_home
diff --git a/roles/kube-install/tasks/system_setup.yml b/roles/kube-install/tasks/system_setup.yml
index b749612..dc3419d 100644
--- a/roles/kube-install/tasks/system_setup.yml
+++ b/roles/kube-install/tasks/system_setup.yml
@@ -2,14 +2,17 @@
 - name: "Disable SELinux :("
   selinux:
     state: disabled
+  register: selinux
 
 - name: "reboot machine"
   shell: "sleep 2 && reboot"
   async: 1
   poll: 0
+  when: selinux.reboot_required
 
 - name: "Wait for VM up"
   local_action: wait_for host={{ ansible_host }} port=22 delay=30
+  when: selinux.reboot_required
 
 - name: "Stop iptables :("
   service:
diff --git a/roles/kube-join-cluster/defaults/main.yml b/roles/kube-join-cluster/defaults/main.yml
new file mode 100644
index 0000000..4c892af
--- /dev/null
+++ b/roles/kube-join-cluster/defaults/main.yml
@@ -0,0 +1,2 @@
+---
+ignore_preflight_version: false
\ No newline at end of file
diff --git a/roles/kube-join-cluster/tasks/main.yml b/roles/kube-join-cluster/tasks/main.yml
index 70d07b8..0f5c9f1 100644
--- a/roles/kube-join-cluster/tasks/main.yml
+++ b/roles/kube-join-cluster/tasks/main.yml
@@ -12,10 +12,16 @@
   register: modified_command
   when: container_runtime == "crio" or skip_preflight_checks
 
+- name: Change the join command when ignoring preflight version
+  shell: >
+    echo {{ modified_command.stdout | default(kubeadm_join_command) }} | sed -e 's/join/join --ignore-preflight-errors=KubernetesVersion,KubeletVersion /'
+  register: modified_command
+  when: ignore_preflight_version
+
 - name: Change the kubeadm_join_command fact when crio
   set_fact:
     kubeadm_join_command: "{{ modified_command.stdout }}"
-  when: container_runtime == "crio" or skip_preflight_checks
+  when: container_runtime == "crio" or skip_preflight_checks or ignore_preflight_version
 
 - name: Join each node to the master with the join command
   shell: >
diff --git a/roles/modified-kube-config/tasks/main.yml b/roles/modified-kube-config/tasks/main.yml
new file mode 100644
index 0000000..04866b4
--- /dev/null
+++ b/roles/modified-kube-config/tasks/main.yml
@@ -0,0 +1,21 @@
+---
+
+- name: Remove NodeRestriction from api server
+  lineinfile:
+    path: /etc/kubernetes/manifests/kube-apiserver.yaml
+    regexp: NodeRestriction
+    state: absent
+
+- name: Add ComputeDevice featureGates to kubelet config
+  blockinfile:
+    path: /var/lib/kubelet/config.yaml
+    block: |
+      featureGates:
+        ComputeDevice: true
+    marker: "# {mark} ANSIBLE MANAGED BLOCK"
+    insertbefore: cgroupDriver
+
+- name: Restart the kubelet
+  service:
+    name: kubelet
+    state: restarted
diff --git a/roles/multus-cni/meta/main.yml b/roles/multus-cni/meta/main.yml
index 0e56d5c..eda416c 100644
--- a/roles/multus-cni/meta/main.yml
+++ b/roles/multus-cni/meta/main.yml
@@ -1,3 +1,3 @@
 ---
 dependencies:
-  - install-go
+  - { role: install-go, golang_gopath: '/root/go' }
diff --git a/roles/multus-cni/tasks/main.yml b/roles/multus-cni/tasks/main.yml
index ef26561..09762bb 100644
--- a/roles/multus-cni/tasks/main.yml
+++ b/roles/multus-cni/tasks/main.yml
@@ -15,7 +15,7 @@
 - name: Compile cni-plugins
   shell: >
-    ./build.sh
+    source /etc/profile.d/golang.sh;./build.sh
   args:
     chdir: /usr/src/cni-plugins
   when: cni_clone.changed
 
@@ -30,7 +30,7 @@
 - name: Compile multus-cni
   shell: >
-    ./build
+    source /etc/profile.d/golang.sh;./build
   args:
     chdir: /usr/src/multus-cni
   when: multus_clone.changed or force_multus_rebuild is defined
 
@@ -44,7 +44,7 @@
 - name: Compile sriov-cni
   shell: >
-    ./build
+    source /etc/profile.d/golang.sh;./build
   args:
     chdir: /usr/src/sriov-cni
   when: sriov_clone.changed
 
diff --git a/roles/multus-crd/defaults/main.yml b/roles/multus-crd/defaults/main.yml
index b5ad19a..8840579 100644
--- a/roles/multus-crd/defaults/main.yml
+++ b/roles/multus-crd/defaults/main.yml
@@ -1,3 +1,3 @@
 ---
-crd_namespace: "kubernetes.com"
-multus_legacy: false
\ No newline at end of file
+crd_namespace: "k8s.cni.cncf.io"
+multus_legacy: false
diff --git a/roles/multus-crd/tasks/main.yml b/roles/multus-crd/tasks/main.yml
index 8f5e118..615b64e 100644
--- a/roles/multus-crd/tasks/main.yml
+++ b/roles/multus-crd/tasks/main.yml
@@ -25,6 +25,15 @@
       dest: "macvlan.yml"
     - src: clusterrole.yml.j2
       dest: "clusterrole.yml"
+    - src: userspace-ovs.yml
+      dest: userspace-ovs.yml
+      when: enable_userspace_ovs_cni
+    - src: virt-crd.yaml
+      dest: virt-crd.yaml
+      when: enable_virt_network_device_plugin
+    - src: virt-ds.yaml
+      dest: virt-ds.yaml
+      when: enable_virt_network_device_plugin
 
 - name: Template multus resources
   template:
@@ -39,16 +48,22 @@
 
 - name: Create network namespace
   set_fact:
-    use_network_namespace: "network.{{ crd_namespace }}"
+    use_network_namespace: "network-attachment-definitions.{{ crd_namespace }}"
 
 - name: Create base CRD
   shell: >
     kubectl create -f {{ ansible_env.HOME }}/multus-resources/multus-crd.yml
   when: "use_network_namespace not in check_crd.stdout"
 
+#- name: Check to see which network CRD definitions are present
+#  k8s_raw:
+#    kind: NetworkAttachmentDefinition
+#    namespace: kube-system
+#  register: check_network_crds
+
 - name: Check to see which network CRD definitions are present
   shell: >
-    kubectl get network
+    kubectl get network-attachment-definitions
   register: check_network_crds
 
 - name: Create flannel network CRD
@@ -61,9 +76,24 @@
     kubectl create -f {{ ansible_env.HOME }}/multus-resources/macvlan.yml
   when: "'macvlan-conf' not in check_network_crds.stdout"
 
+- name: Create userspace ovs network CRD
+  shell: >
+    kubectl create -f {{ ansible_env.HOME }}/multus-resources/userspace-ovs.yml
+  when: "'userspace-ovs' not in check_network_crds.stdout and enable_userspace_ovs_cni"
+
+- name: Create virt-net network CRD
+  shell: >
+    kubectl create -f {{ ansible_env.HOME }}/multus-resources/virt-crd.yaml
+  when: "'virt-net' not in check_network_crds.stdout and enable_virt_network_device_plugin"
+
+#- name: Create virt-net daemonset
+#  shell: >
+#    kubectl create -f {{ ansible_env.HOME }}/multus-resources/virt-ds.yaml
+#  when: enable_virt_network_device_plugin
+
 - name: Check to see which CRDs are present, for validation
   shell: >
-    kubectl get network
+    kubectl get net-attach-def
   register: verify_network_crd
 
 - name: Verify which network CRD definitions are present
@@ -98,4 +128,4 @@
   with_items:
     - "{{ groups['nodes'] + groups['master'] }}"
   when: >
-    "hostvars[item]['inventory_hostname']" not in output_crb.stdout
\ No newline at end of file
+    "hostvars[item]['inventory_hostname']" not in output_crb.stdout
diff --git a/roles/multus-crd/templates/crd.yml.j2 b/roles/multus-crd/templates/crd.yml.j2
index 16b2410..28460ce 100644
--- a/roles/multus-crd/templates/crd.yml.j2
+++ b/roles/multus-crd/templates/crd.yml.j2
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
 kind: CustomResourceDefinition
 metadata:
   # name must match the spec fields below, and be in the form: <plural>.<group>
-  name: networks.{{ crd_namespace }}
+  name: network-attachment-definitions.{{ crd_namespace }}
 spec:
   # group name to use for REST API: /apis/<group>/<version>
   group: {{ crd_namespace }}
@@ -12,11 +12,18 @@ spec:
   scope: Namespaced
   names:
     # plural name to be used in the URL: /apis/<group>/<version>/<plural>
-    plural: networks
+    plural: network-attachment-definitions
    # singular name to be used as an alias on the CLI and for display
-    singular: network
+    singular: network-attachment-definition
    # kind is normally the CamelCased singular type. Your resource manifests use this.
-    kind: Network
+    kind: NetworkAttachmentDefinition
    # shortNames allow shorter string to match your resource on the CLI
     shortNames:
-    - net
+    - net-attach-def
+  validation:
+    openAPIV3Schema:
+      properties:
+        spec:
+          properties:
+            config:
+              type: string
diff --git a/roles/multus-crd/templates/flannel.yml.j2 b/roles/multus-crd/templates/flannel.yml.j2
index 6700396..e607583 100644
--- a/roles/multus-crd/templates/flannel.yml.j2
+++ b/roles/multus-crd/templates/flannel.yml.j2
@@ -1,5 +1,5 @@
-apiVersion: "kubernetes.cni.cncf.io/v1"
-kind: Network
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
 metadata:
   name: flannel-conf
 plugin: flannel
diff --git a/roles/multus-crd/templates/macvlan.yml.j2 b/roles/multus-crd/templates/macvlan.yml.j2
index f03b4bb..089081f 100644
--- a/roles/multus-crd/templates/macvlan.yml.j2
+++ b/roles/multus-crd/templates/macvlan.yml.j2
@@ -1,5 +1,5 @@
-apiVersion: "kubernetes.cni.cncf.io/v1"
-kind: Network
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
 metadata:
   name: macvlan-conf
 spec:
diff --git a/roles/multus-crd/templates/userspace-ovs.yml b/roles/multus-crd/templates/userspace-ovs.yml
new file mode 100644
index 0000000..f7b53e5
--- /dev/null
+++ b/roles/multus-crd/templates/userspace-ovs.yml
@@ -0,0 +1,15 @@
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+  name: userspace-ovs
+spec:
+  config: '{
+    "cniVersion": "0.3.0",
+    "type": "userspace",
+    "LogLevel": "debug",
+    "LogFile": "/var/log/userspace.log",
+    "host": {
+      "engine": "ovs-dpdk",
+      "iftype": "vhostuser"
+    }
+  }'
diff --git a/roles/multus-crd/templates/virt-crd.yaml b/roles/multus-crd/templates/virt-crd.yaml
new file mode 100644
index 0000000..744b9c9
--- /dev/null
+++ b/roles/multus-crd/templates/virt-crd.yaml
@@ -0,0 +1,20 @@
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+  name: virt-net
+  annotations:
+    k8s.v1.cni.cncf.io/resourceName: kernel.org/virt
+spec:
+  config: '{
+    "type": "ehost-device",
+    "name": "virt-network",
+    "cniVersion": "0.3.0",
+    "ipam": {
+      "type": "host-local",
+      "subnet": "10.56.217.0/24",
+      "routes": [{
+        "dst": "0.0.0.0/0"
+      }],
+      "gateway": "10.56.217.1"
+    }
+}'
diff --git a/roles/multus-crd/templates/virt-ds.yaml b/roles/multus-crd/templates/virt-ds.yaml
new file mode 100644
index 0000000..e7a6e4e
--- /dev/null
+++ b/roles/multus-crd/templates/virt-ds.yaml
@@ -0,0 +1,60 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: virt-device-plugin
+  namespace: kube-system
+
+---
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: kube-virt-device-plugin-amd64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: virtdp
+spec:
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: virtdp
+    spec:
+      hostNetwork: true
+      hostPID: true
+      nodeSelector:
+        beta.kubernetes.io/arch: amd64
+      tolerations:
+      - key: node-role.kubernetes.io/master
+        operator: Exists
+        effect: NoSchedule
+      serviceAccountName: virt-device-plugin
+      containers:
+      - name: kube-virtdp
+        image: nfvpe/virtdp
+        imagePullPolicy: Never
+        command: [ '/usr/src/virt-network-device-plugin/bin/virtdp', '-logtostderr', '-v', '10' ]
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: devicesock
+          mountPath: /var/lib/kubelet/device-plugins/
+          readOnly: false
+        - name: net
+          mountPath: /sys/class/net
+          readOnly: true
+      volumes:
+        - name: devicesock
+          hostPath:
+            path: /var/lib/kubelet/device-plugins/
+        - name: net
+          hostPath:
+            path: /sys/class/net
diff --git a/roles/set-proxy/filter_plugins/calculate_no_proxy.py b/roles/set-proxy/filter_plugins/calculate_no_proxy.py
new file mode 100755
index 0000000..f95f181
--- /dev/null
+++ b/roles/set-proxy/filter_plugins/calculate_no_proxy.py
@@ -0,0 +1,71 @@
+# Copyright (c) 2018, Intel Corporation.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice,
+#   this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+#   notice, this list of conditions and the following disclaimer in the
+#   documentation and/or other materials provided with the distribution.
+# * Neither the name of Intel Corporation nor the names of its contributors
+#   may be used to endorse or promote products derived from this software
+#   without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+from collections import OrderedDict
+
+
+def contextfilter(f):
+    """Decorator for marking context dependent filters. The current
+    :class:`Context` will be passed as first argument.
+ """ + f.contextfilter = True + return f + + +@contextfilter +def do_calculate_no_proxy(context, proxy_env): + # if proxy_env is empty, abort + if not proxy_env: + return proxy_env + no_proxy = [] + try: + node_info = context['node_info'] + except KeyError: + pass + else: + node_set = set(node_info) + all_hosts = set(context['groups']['all']) + current_hosts = all_hosts.intersection(node_set) + no_proxy.extend(node_info[host]['networks']['mgmt']['ip_address'] for host in current_hosts) + try: + no_proxy.append(context['kolla_internal_vip_address']) + except KeyError: + pass + try: + if context['kolla_internal_vip_address'] != context['kolla_external_vip_address']: + no_proxy.append(context['kolla_external_vip_address']) + except KeyError: + pass + if no_proxy: + no_proxy_dict = OrderedDict.fromkeys(proxy_env['no_proxy'].split(',')) + no_proxy_dict.update((k, '') for k in no_proxy) + proxy_env['no_proxy'] = ','.join(no_proxy_dict) + return proxy_env + + +class FilterModule(object): + def filters(self): + return { + 'calculate_no_proxy': do_calculate_no_proxy, + } diff --git a/roles/set-proxy/tasks/main.yml b/roles/set-proxy/tasks/main.yml new file mode 100644 index 0000000..4fd7f58 --- /dev/null +++ b/roles/set-proxy/tasks/main.yml @@ -0,0 +1,37 @@ +# Copyright (c) 2018, Intel Corporation. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are met: +# +# * Redistributions of source code must retain the above copyright notice, +# this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# * Neither the name of Intel Corporation nor the names of its contributors +# may be used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +--- + - name: set /etc/environment proxy settings + lineinfile: + dest: /etc/environment + regexp: '^{{ item.key }}' + line: '{{ item.key }}={{ item.value }}' + state: present + create: yes + owner: root + group: root + mode: 0644 + with_dict: "{{ calc_proxy_env }}" + when: '"http_proxy" in calc_proxy_env or "https_proxy" in calc_proxy_env' diff --git a/roles/tmate/files/tmate.sh b/roles/tmate/files/tmate.sh new file mode 100644 index 0000000..ea0da34 --- /dev/null +++ b/roles/tmate/files/tmate.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +# Flush out existing keys & create new +rm -f /home/centos/.ssh/id_rsa* +ssh-keygen -b 2048 -t rsa -f /home/centos/.ssh/id_rsa -q -N "" + +# Kill tmate processes if they exist. 
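+# (note: the -f flag below makes pgrep/pkill match against the full command
+# line; the pattern "tmate -S" contains a space, so it can't match a bare
+# process name)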
+if pgrep -f "tmate -S" > /dev/null; then
+  pkill -9 -f "tmate -S"
+fi
+rm -Rf /tmp/tmate*
+
+# Spin up two tmate sessions (one for a backup)
+tmate -S /tmp/tmate.sock new-session -d
+tmate -S /tmp/tmate.sock wait tmate-ready
+tmate -S /tmp/tmate.sock display -p "#{tmate_ssh}"
+
+tmate -S /tmp/tmate2.sock new-session -d
+tmate -S /tmp/tmate2.sock wait tmate-ready
+tmate -S /tmp/tmate2.sock display -p "#{tmate_ssh}"
diff --git a/roles/tmate/tasks/main.yml b/roles/tmate/tasks/main.yml
new file mode 100644
index 0000000..af4db08
--- /dev/null
+++ b/roles/tmate/tasks/main.yml
@@ -0,0 +1,15 @@
+---
+
+- name: Copy up tmate shell script
+  copy:
+    src: tmate.sh
+    dest: /home/centos/.tmate.sh
+    mode: 0755
+    owner: centos
+    group: centos
+
+- name: Clone container-experience-kits-demo-area locally
+  git:
+    repo: https://github.com/intel/container-experience-kits-demo-area
+    dest: "/home/centos/container-experience-kits-demo-area"
+    force: yes
diff --git a/roles/userspace-cni/meta/main.yml b/roles/userspace-cni/meta/main.yml
new file mode 100644
index 0000000..eda416c
--- /dev/null
+++ b/roles/userspace-cni/meta/main.yml
@@ -0,0 +1,3 @@
+---
+dependencies:
+  - { role: install-go, golang_gopath: '/root/go' }
diff --git a/roles/userspace-cni/tasks/main.yml b/roles/userspace-cni/tasks/main.yml
new file mode 100644
index 0000000..5836c92
--- /dev/null
+++ b/roles/userspace-cni/tasks/main.yml
@@ -0,0 +1,152 @@
+---
+
+- name: "Add hugepages to sysctl.conf"
+  lineinfile:
+    path: /etc/sysctl.conf
+    regexp: 'nr_hugepages'
+    line: vm.nr_hugepages = 1024
+  register: set_hugepage_sysctl
+
+# You can verify with `cat /proc/meminfo | grep Huge`
+
+- name: Reload sysctl
+  shell: >
+    sysctl -p
+  when: set_hugepage_sysctl.changed
+
+# VPP installation
+# sudo yum install centos-release-fdio
+# sudo yum install vpp*
+
+- name: Install jq binary
+  get_url:
+    url: https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
+    dest: /usr/bin/jq
+
+- name: set jq binary permission
+  file:
+    path: /usr/bin/jq
+    mode: 0755
+
+# Moving from CentOS NFV SIG repo to Nexus repo.
+
+# cat /etc/yum.repos.d/fdio-release.repo
+# [fdio-release]
+# name=fd.io release branch latest merge
+# baseurl=https://nexus.fd.io/content/repositories/fd.io.centos7/
+# enabled=1
+# gpgcheck=0
+
+# sudo yum install vpp-18.07-release.x86_64 \
+#   vpp-lib-18.07-release.x86_64 \
+#   vpp-plugins-18.07-release.x86_64 \
+#   vpp-devel-18.07-release.x86_64 \
+#   vpp-api-python-18.07-release.x86_64 \
+#   vpp-api-lua-18.07-release.x86_64 \
+#   vpp-api-java-18.07-release.x86_64 \
+#   vpp-selinux-policy-18.07-release.x86_64
+
+# Dockerhub has been updated with (both tags point to same image):
+# bmcfall/vpp-centos-userspace-cni:0.3.0
+# bmcfall/vpp-centos-userspace-cni:latest
+
+
+# FYI: On my server, I am almost out of hugepages, so you are at your limit.
+# If you need more VMs, may need to reduce the number of hugepages given to
+# each VM (like from 4K to 2K).
+# $ cat /proc/meminfo | grep -i huge
+# AnonHugePages:     79872 kB
+# HugePages_Total:   32768
+# HugePages_Free:     3991
+# :
+
+# Billy
+
+- name: Add fd.io release yum repository
+  yum_repository:
+    name: fdio-release
+    description: "fd.io repo"
+    baseurl: https://nexus.fd.io/content/repositories/fd.io.centos7/
+    gpgcheck: no
+
+- name: Install VPP repo
+  yum:
+    name: "centos-release-fdio"
+    state: latest
+
+# How do I install the proper VPP version?
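+# The next task pins the 18.07 release packages explicitly; once installed you
+# can confirm the running build with, e.g.: vppctl show version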
+- name: Install VPP
+  yum:
+    name: "{{ item }}"
+    state: present
+  with_items:
+    - vpp-18.07-release
+    - vpp-lib-18.07-release
+    - vpp-plugins-18.07-release
+    - vpp-devel-18.07-release
+    - vpp-api-python-18.07-release
+    - vpp-api-lua-18.07-release
+    - vpp-api-java-18.07-release
+    - vpp-selinux-policy-18.07-release
+
+
+- name: Start & Enable VPP
+  service:
+    name: "vpp"
+    state: started
+    enabled: yes
+
+- name: Ensure git is installed
+  yum:
+    name: git
+    state: present
+
+# CHANGE THIS URL and paths.
+# ...and in the docs.
+- name: Clone Userspace CNI Plugin repo
+  git:
+    repo: https://github.com/intel/userspace-cni-network-plugin.git
+    dest: "{{ ansible_env.HOME }}/go/src/github.com/intel/userspace-cni-network-plugin"
+    force: true
+
+- name: Build Userspace CNI plugin
+  shell: >
+    source /etc/profile.d/golang.sh; make
+  args:
+    chdir: "{{ ansible_env.HOME }}/go/src/github.com/intel/userspace-cni-network-plugin"
+    creates: "{{ ansible_env.HOME }}/go/src/github.com/intel/userspace-cni-network-plugin/userspace/userspace"
+
+- name: Copy Userspace CNI plugin to CNI bin dir
+  shell: >
+    cp userspace/userspace /opt/cni/bin/userspace
+  args:
+    chdir: "{{ ansible_env.HOME }}/go/src/github.com/intel/userspace-cni-network-plugin"
+    creates: "/opt/cni/bin/userspace"
+
+- name: Clone CNI repo
+  git:
+    repo: https://github.com/containernetworking/cni.git
+    dest: "{{ ansible_env.HOME }}/go/src/github.com/containernetworking/cni"
+
+- name: Make an alternate cni net.d dir
+  file:
+    path: /etc/alternate.net.d/
+    state: directory
+
+- name: Template userspace CNI config
+  template:
+    src: 90-userspace.conf.j2
+    dest: /etc/alternate.net.d/90-userspace.conf
+
+# manually... to run the demo.
+# $ docker pull bmcfall/vpp-centos-userspace-cni:0.2.0
+# $ cp userspace/userspace /opt/cni/bin/
+# $ sed -i -e 's|vpp-centos-userspace-cni|bmcfall/vpp-centos-userspace-cni:0.2.0|' scripts/vpp-docker-run.sh
+# $ export CNI_PATH=/opt/cni/bin; export NETCONFPATH=/etc/alternate.net.d/; export GOPATH=/root/src/go/; ./scripts/vpp-docker-run.sh -it --privileged docker.io/bmcfall/vpp-centos-userspace-cni:0.2.0
+# ------ in container
+# vppctl show interface
+# vppctl show mode
+# vppctl show memif
+# create two of 'em and do:
+# vppctl ping 192.168.210.2
+
diff --git a/roles/userspace-cni/templates/90-userspace.conf.j2 b/roles/userspace-cni/templates/90-userspace.conf.j2
new file mode 100644
index 0000000..6469e18
--- /dev/null
+++ b/roles/userspace-cni/templates/90-userspace.conf.j2
@@ -0,0 +1,34 @@
+{
+  "cniVersion": "0.3.1",
+  "type": "userspace",
+  "name": "memif-network",
+  "if0name": "net0",
+  "host": {
+    "engine": "vpp",
+    "iftype": "memif",
+    "netType": "bridge",
+    "memif": {
+      "role": "master",
+      "mode": "ethernet"
+    },
+    "bridge": {
+      "bridgeId": 4
+    }
+  },
+  "container": {
+    "engine": "vpp",
+    "iftype": "memif",
+    "netType": "interface",
+    "memif": {
+      "role": "slave",
+      "mode": "ethernet"
+    }
+  },
+  "ipam": {
+    "type": "host-local",
+    "subnet": "192.168.210.0/24",
+    "routes": [
+      { "dst": "0.0.0.0/0" }
+    ]
+  }
+}
\ No newline at end of file
diff --git a/roles/userspace-ovs-cni/tasks/main.yml b/roles/userspace-ovs-cni/tasks/main.yml
new file mode 100644
index 0000000..68538e4
--- /dev/null
+++ b/roles/userspace-ovs-cni/tasks/main.yml
@@ -0,0 +1,20 @@
+---
+- name: Clone Userspace CNI Plugin repo
+  git:
+    repo: https://github.com/intel/userspace-cni-network-plugin.git
+    dest: "{{ ansible_env.HOME }}/go/src/github.com/intel/userspace-cni-network-plugin"
+    force: true
+
+- name: Build Userspace CNI plugin
+  shell: >
+    source /etc/profile.d/golang.sh; make install-dep; make install; make
+  args:
+    chdir: "{{ ansible_env.HOME }}/go/src/github.com/intel/userspace-cni-network-plugin"
+    creates: "{{ ansible_env.HOME }}/go/src/github.com/intel/userspace-cni-network-plugin/userspace/userspace"
+
+- name: Copy Userspace CNI plugin to CNI bin dir
+  shell: >
+    cp userspace/userspace /opt/cni/bin/userspace
+  args:
+    chdir: "{{ ansible_env.HOME }}/go/src/github.com/intel/userspace-cni-network-plugin"
+    creates: "/opt/cni/bin/userspace"
diff --git a/roles/virt-network-device-plugin/tasks/main.yaml b/roles/virt-network-device-plugin/tasks/main.yaml
new file mode 100644
index 0000000..f3d93ad
--- /dev/null
+++ b/roles/virt-network-device-plugin/tasks/main.yaml
@@ -0,0 +1,67 @@
+---
+
+- name: Ensure git is installed
+  yum:
+    name: git
+    state: present
+
+- name: Clone repo
+  git:
+    repo: https://github.com/zshi-redhat/virt-network-device-plugin.git
+    dest: /usr/src/virt-network-device-plugin
+    force: true
+
+- name: Build it
+  shell: >
+    source /etc/profile.d/golang.sh;./build.sh
+  args:
+    chdir: /usr/src/virt-network-device-plugin
+
+# - name: Copy bin... to where?
+#   shell: >
+#     cp ./bin/virtdp /where/to?
+#   args:
+#     chdir: "{{ ansible_env.HOME }}/src/go/src/github.com/zshi-redhat/virt-network-device-plugin"
+#     creates: "{{ ansible_env.HOME }}/src/go/src/github.com/zshi-redhat/virt-network-device-plugin/bin/virtdp"
+
+#- name: Build docker image
+#  shell: >
+#    ./build_docker.sh
+#  args:
+#    chdir: /usr/src/virt-network-device-plugin
+
+
+# - name: Build Userspace CNI plugin
+#   shell: >
+#     export GOPATH={{ ansible_env.HOME }}/src/go; make
+#   args:
+#     chdir: "{{ ansible_env.HOME }}/src/go/src/github.com/Billy99/user-space-net-plugin"
+#     creates: "{{ ansible_env.HOME }}/src/go/src/github.com/Billy99/user-space-net-plugin/userspace/userspace"
+
+# - name: Copy Userspace CNI plugin to CNI bin dir
+#   shell: >
+#     cp userspace/userspace /opt/cni/bin/userspace
+#   args:
+#     chdir: "{{ ansible_env.HOME }}/src/go/src/github.com/Billy99/user-space-net-plugin"
+#     creates: "/opt/cni/bin/userspace"
+
+# - name: Clone CNI repo
+#   git:
+#     repo: https://github.com/containernetworking/cni.git
+#     dest: "{{ ansible_env.HOME }}/src/go/src/github.com/containernetworking/cni"
+
+# - name: Make an alternate cni net.d dir
+#   file:
+#     path: /etc/alternate.net.d/
+#     state: directory
+
+# - name: Template userspace CNI config
+#   template:
+#     src: 90-userspace.conf.j2
+#     dest: /etc/alternate.net.d/90-userspace.conf
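+
+# A possible follow-up (hypothetical sketch, mirroring the commented-out
+# multus-crd task above), once the virtdp image is present on the nodes:
+# - name: Create virt-net daemonset
+#   shell: >
+#     kubectl create -f {{ ansible_env.HOME }}/multus-resources/virt-ds.yaml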