diff --git a/README.md b/README.md
index b8fcb0d..a33e511 100644
--- a/README.md
+++ b/README.md
@@ -1,53 +1,141 @@
 [![Build Status](https://circleci.com/gh/cloudify-examples/simple-kubernetes-blueprint.svg?style=shield&circle-token=:circle-token)](https://circleci.com/gh/cloudify-examples/simple-kubernetes-blueprint)
-## Simple Kubernetes Example Blueprint
+## Simple Kubernetes Blueprint
-The blueprints in this project provide orchestration for starting, healing, and scaling a [Kubernetes](https://kubenretes.io/) cluster on Openstack. There are 2 blueprints, with slightly different use cases:
-* **openstack-blueprint.yaml** : an Openstack bluieprint that orchestrates setup, teardown, autohealing, and autoscaling of the cluster
-* **service-blueprint** : an example blueprint that uses the [Kubernetes plugin](https://github.com/cloudify-examples/cloudify-kubernetes-plugin) to install a simple Nginx service on the Kubernetes cluster.
+This blueprint deploys a simple Kubernetes cluster.
-### Prerequisites
+## Prerequisites
-These blueprints have only been tested against an Ubuntu 14.04 image with 2GB of RAM. The image used must be pre-installed with Docker 1.11. Any image used should have passwordless ssh, and passwordless sudo with `requiretty` false or commented out in sudoers. Also required is an Openstack cloud environment. The blueprints were tested on Openstack Kilo.
+You will need a *Cloudify Manager* running in either AWS, Azure, or Openstack.
-### Cloudify Version
+If you have not already, set up the [example Cloudify environment](https://github.com/cloudify-examples/cloudify-environment-setup). Installing that blueprint and following all of the configuration instructions will ensure you have all of the prerequisites, including keys, plugins, and secrets.
-These blueprints were tested on Cloudify 3.4.0 and on Cloudify 4.0.
-### Operation
+### Step 1: Install the demo application
-Cloudify 4.0:
-* Run `cfy install [path-to-blueprint-file] -i [path-to-inputs-file]`
+In this step, you first gather two pieces of information from your cloud account: the details of a CentOS 7.0 image and a medium-sized instance type (flavor). These values are already provided as defaults for AWS us-east-1 and Azure East US.
-#### openstack-blueprint.yaml instructions
+Next, you provide those inputs to the blueprint and execute the install workflow:
-* Start a Cloudify 3.4.0 [manager](http://docs.getcloudify.org/3.4.0/manager/bootstrapping/).
-* Edit the `inputs.yaml` file to add image, flavor, and user name (probably ubuntu).
-* run `cfy blueprints upload -b kubernetes -p kubernetes-openstack-blueprint.yaml`
-* run `cfy deployments create -b kubernetes -d kubernetes -i input/openstack.yaml`
-* run `cfy executions start -d kubernetes -w install`
+#### For AWS run:
-This will create the Kubernetes cluster, including the Kubernetes dashboard. The Kubernetes dashboard URL is displayed by running `cfy deployments outputs -d kubernetes`.
-
-To see autohealing in action, go to the Openstack Horizon dashboard and terminate the worker. Then go to the Cloudify UI deployments tab. See the `heal` workflow begin and restore the missing node.
-
-To see autoscaling in action:
-* ssh to the Cloudify manager: `cfy ssh`
-* ssh to a kubernetes worker node: `sudo ssh -i /root/.ssh/agent_key.pem ubuntu@`
-* run `sudo apt-get install -y stress`
-* run `stress -c 2 -t 10`
-* Then go to the Cloudify UI deployments tab. See the `scale` workflow begin and grow the cluster.
-
-In a few minutes, the cluster will scale down to it's original size (one worker) due to the scale down policy in the blueprint.
-
-To tear down the cluster, run `cfy executions start -d kubernetes -w uninstall`
-
-#### service-blueprint.yaml instructions
-
-* With the Kubernetes cluster started as describe above (deployment must be named `kubernetes for this example`), run `cfy blueprints upload -b service -p service-blueprint.yaml`.
-* run `cfy deployments create -b service -d service`
-* run `cfy executions start -d service -w install`
-
-This will install an Nginx service and the Nginx containers on the Kubernetes environment. This will be visible via the Kubernetes dashboard as describe above.
-
-To uninstall the service and containers, run `cfy executions start -d service -w uninstall`
+```shell
+$ cfy install \
+    https://github.com/cloudify-examples/simple-kubernetes-blueprint/archive/4.0.1.zip \
+    -b demo \
+    -n aws-blueprint.yaml
+```
+
+
+#### For Azure run:
+
+```shell
+$ cfy install \
+    https://github.com/cloudify-examples/simple-kubernetes-blueprint/archive/4.0.1.zip \
+    -b demo \
+    -n azure-blueprint.yaml
+```
+
+
+#### For Openstack run:
+
+```shell
+$ cfy install \
+    https://github.com/cloudify-examples/simple-kubernetes-blueprint/archive/4.0.1.zip \
+    -b demo \
+    -n openstack-blueprint.yaml -i flavor=[MEDIUM_SIZED_FLAVOR] -i image=[CENTOS_7_IMAGE_ID]
+```
+
+
+You should see something like this when you execute the command:
+
+```shell
+$ cfy install \
+    https://github.com/cloudify-examples/simple-kubernetes-blueprint/archive/4.0.1.zip \
+    -b demo \
+    -n aws-blueprint.yaml
+Uploading blueprint simple-kubernetes-blueprint/aws-blueprint.yaml...
+ aws-blueprint.yaml |##################################################| 100.0%
+Blueprint uploaded. The blueprint's id is aws
+Creating new deployment from blueprint aws...
+Deployment created. The deployment's id is aws
+Executing workflow install on deployment aws [timeout=900 seconds]
+Deployment environment creation is in progress...
+2017-05-30 11:35:20.609 CFY Starting 'create_deployment_environment' workflow execution
+2017-05-30 11:35:20.941 CFY Installing deployment plugins
+2017-05-30 11:35:21.028 CFY [,] Sending task 'cloudify_agent.operations.install_plugins'
+2017-05-30 11:35:21.067 CFY [,] Task started 'cloudify_agent.operations.install_plugins'
+2017-05-30 11:35:21.094 LOG [,] INFO: Installing plugin: aws
+2017-05-30 11:35:21.688 LOG [,] INFO: Using existing installation of managed plugin: 444f7f27-6508-45fe-8d18-a0b2da729538 [package_name: cloudify-aws-plugin, package_version: 1.4.9, supported_platform: linux_x86_64, distribution: centos, distribution_release: core]
+2017-05-30 11:35:21.713 CFY [,] Task succeeded 'cloudify_agent.operations.install_plugins'
+2017-05-30 11:35:21.866 CFY Starting deployment policy engine core
+2017-05-30 11:35:22.053 CFY [,] Sending task 'riemann_controller.tasks.create'
+2017-05-30 11:35:22.069 CFY [,] Task started 'riemann_controller.tasks.create'
+2017-05-30 11:35:23.093 CFY [,] Task succeeded 'riemann_controller.tasks.create'
+2017-05-30 11:35:23.344 CFY Creating deployment work directory
+2017-05-30 11:35:23.670 CFY 'create_deployment_environment' workflow execution succeeded
+2017-05-30 11:35:26.137 CFY Starting 'install' workflow execution
+```
+
+
+### Step 2: Verify the demo installed and started.
+
+Once the workflow execution is complete, get your configuration file contents from your Kubernetes master:
+
+
+```shell
+$ cfy node-instances list
+Listing all instances...
+ +Node-instances: ++-----------------------------------+---------------------------------------+-------------------------------+----------------------------+---------------+------------+----------------+------------+ +| id | deployment_id | host_id | node_id | state | permission | tenant_name | created_by | ++-----------------------------------+---------------------------------------+-------------------------------+----------------------------+---------------+------------+----------------+------------+ +| cloudify_host_cloud_config_ff84al | simple-kubernetes-blueprint | | cloudify_host_cloud_config | started | creator | default_tenant | admin | +| kubernetes_master_rzob7x | simple-kubernetes-blueprint | kubernetes_master_host_5puozx | kubernetes_master | started | creator | default_tenant | admin | +| kubernetes_master_host_5puozx | simple-kubernetes-blueprint | kubernetes_master_host_5puozx | kubernetes_master_host | started | creator | default_tenant | admin | +| kubernetes_master_ip_zn18sp | simple-kubernetes-blueprint | | kubernetes_master_ip | started | creator | default_tenant | admin | +| kubernetes_node_sq215s | simple-kubernetes-blueprint | kubernetes_node_host_j4zbdi | kubernetes_node | started | creator | default_tenant | admin | +| kubernetes_node_host_j4zbdi | simple-kubernetes-blueprint | kubernetes_node_host_j4zbdi | kubernetes_node_host | started | creator | default_tenant | admin | +| kubernetes_security_group_qmlgu1 | simple-kubernetes-blueprint | | kubernetes_security_group | started | creator | default_tenant | admin | +| private_subnet_wms6tb | simple-kubernetes-blueprint | | private_subnet | started | creator | default_tenant | admin | +| public_subnet_nfl134 | simple-kubernetes-blueprint | | public_subnet | started | creator | default_tenant | admin | +| ssh_group_ov2gy2 | simple-kubernetes-blueprint | | ssh_group | started | creator | default_tenant | admin | +| vpc_wwpkx7 | simple-kubernetes-blueprint | | vpc | started | creator | default_tenant | admin | ++-----------------------------------+---------------+-------------------------------+----------------------------+---------------+------------+----------------+------------+ + + +$ cfy node-i get kubernetes_master_rzob7x +Retrieving node instance kubernetes_master_rzob7x + +Node-instance: ++--------------------------+---------------------------------------+-------------------------------+-------------------+---------+------------+----------------+------------+ +| id | deployment_id | host_id | node_id | state | permission | tenant_name | created_by | ++--------------------------+---------------------------------------+-------------------------------+-------------------+---------+------------+----------------+------------+ +| kubernetes_master_rzob7x | simple-kubernetes-blueprint | kubernetes_master_host_5puozx | kubernetes_master | started | creator | default_tenant | admin | ++--------------------------+---------------------------------------+-------------------------------+-------------------+---------+------------+----------------+------------+ + +Instance runtime properties: + join_command: kubeadm join --token 163f7e.2be3d0fcf46a7f5d 10.10.0.153:6443 + configuration_file_content: apiVersion: v1 +clusters: +- cluster: + certificate-authority-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EVXpNREV4TXpreE9Gb1hEVEkzTURVeU9ERXhNemt4T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT1d6Cjg0WUQ5UjlMdURLc1NTeVlHTjFZQUNpUE1XSVgvTVJoYzN4emdIbzhQbGZmSitZQ0xGUjhqSy9oZUhtdnV6NXkKcVI0bWpuakpyWFJrc3A4VE1QckhQODB4dXVuWGg2K0dad05WR3pOckpZUVBKcFlXVmo4NkE4MDQzZ1NmNStrVgp5dnFhYVJwd1JZVEMxYkhQOTE0MXZITG9OTUNtaWdheXhmemJJOVFETjFwN2FpMmNFbEp3WmN0S3luK2ltd3UvCkJXbm5WK1NOWEYycXU5cnhpVGtEcWdJOVlXcUFjRFNFcHhmY0RuR0VkVjdFNWxEWDRtaks0Q0Exbk10dE8rUWcKQ3dQWEJmSW52RjVRU2c1dzIxb2tzU0k5Yk9GdWRxeWhOVUlUck1VdURaV0Z2MXZTT1JmU1ZuS2I5YzZMaTBNZwpuNHRzd3FTaFRrajlwS3JhZ1hNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLRTRxdWxBbWdUcWFZWFJNSzZDTmhveGpTYXAKeVZlNFRCd0VzN1IzK214ZWxieEsxYjNJTEVyU1lmYkFGVFNWQTlJbEhNTnp5aStKZGNMVm1vUFFVQjZKN2hrbQpkYTBvSWM2Q1prWElCZk9Ccm1lT3JrWFlxYUdvYWNpV0xzcnV4MElIdnFTbWRhZ3JCeWR6M3dqOU0xR0J4MGVGClBIUllpTDY2TEpzVTk4aVNrTzBEeW1maEdadnRHRTgvY0lpRlk4YmYyWDNwb2dBWlJLTlhTb3BxWGx2SklsdU8KR0FWQlhHMTdGNDRpbjRYWGpVTUpVUjQwVUZoWjBPcWt6Z2NRay9yWDN4TUZhK1BmSXhZK2dTVHN3UjBUcDRmdwoyUVpqdWNzdk5XMHFSV1BqTDA5WHdPZUdWbnpGYVBvRHZLOGVkMGVGMUFIdnNZQTlhMWNSdGlSc3VLcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + server: https://10.10.0.153:6443 + name: kubernetes +contexts: +- context: + cluster: kubernetes + user: kubernetes-admin + name: kubernetes-admin@kubernetes +current-context: kubernetes-admin@kubernetes +kind: Config +preferences: {} +users: +- name: kubernetes-admin + user: + client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJUkVZL3VnNnJZQXN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4TnpBMU16QXhNVE01TVRoYUZ3MHhPREExTXpBeE1UTTVNakJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTVENFBBdXZLYjBER0l3QjAKd1Z4US9VUHorR1N4U2V1L2E0MDY4M0ZuV0JiSEtWWGNLMnFpcGMrZTFaKzM5OWhFTHpaVnh2OFI2RHRLUHA1VApHdkR2aTRneUdnckJpQWZGdFlUc1JuT0JFTnZPMEVMdUhXV09XRHFZeldIYk1sTFRINDZ0VzMwYUsvRFRzcC9JClA2TUNwSWpYd3luQkV4NjVXL2hzUlFiNUlRZ3BmQ25TMmYrQnZqd1dDUkNPOEU3YUpxMXB6TlBIWHdQVDgzQncKcklSS0ZxbUdXeFYvOGVCd2RXODN3Mm0xcHREUWxCdVZiVUNvMGF4R0lPQXVpOFNPbHJ2aGFkL2J3NUZxRWJGTQovVDZOcVduc1ZPaWlKZU56RjZrUkpiUHppc0FuWVpxNUl1eG5HaWdLYnFNY2xJdjk3NUNGQmhISTRGUG1aT1FqCkRhVUk5UUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLR2g3RnNMM1BhOTBORVhRRFlTd0EwQTNPTnlKcWEvb0g5Ygp6R2JIMDB3UkVLUis3UVVpNkdQbUJIdG1GSWRReXR3cWpMcENjYm9rS0IyTkYvRUF3VnZPN3VubFZ6Tmk0QjBBCmpSR3c0QWswSTVEc0Z0UU0yaUo2SmpRTzRGYmlxcldTZkNXMU9DaEViei9RbmdMQ0pRN1FteHhxcjNsWVVqeDYKTXBKRmd6OVNmVGVFNUNpQjVhT3QvU0pWSVJYU3hGNWtVc3c0K1FjcWRHeWFRa2hRRERERUZyZEplcWczRkFFcwpmbmR5RmNOOExnYURJcWFDSUp0MFYzSWFNbUFvMS9XVElrVHVJQmxOZzdJZG1wUTl4dGwvSjJLY3pGR1FKMFZWCnVxbG40ajJvOWk2b1o3ZmExTThwUUFlOWpicGdNRW9lNld3ckpDWkxUSVRCRjF1MVp1Zz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBNUQ0UEF1dktiMERHSXdCMHdWeFEvVVB6K0dTeFNldS9hNDA2ODNGbldCYkhLVlhjCksycWlwYytlMVorMzk5aEVMelpWeHY4UjZEdEtQcDVUR3ZEdmk0Z3lHZ3JCaUFmRnRZVHNSbk9CRU52TzBFTHUKSFdXT1dEcVl6V0hiTWxMVEg0NnRXMzBhSy9EVHNwL0lQNk1DcElqWHd5bkJFeDY1Vy9oc1JRYjVJUWdwZkNuUwoyZitCdmp3V0NSQ084RTdhSnExcHpOUEhYd1BUODNCd3JJUktGcW1HV3hWLzhlQndkVzgzdzJtMXB0RFFsQnVWCmJVQ28wYXhHSU9BdWk4U09scnZoYWQvYnc1RnFFYkZNL1Q2TnFXbnNWT2lpSmVOekY2a1JKYlB6aXNBbllacTUKSXV4bkdpZ0ticU1jbEl2OTc1Q0ZCaEhJNEZQbVpPUWpEYVVJOVFJREFRQUJBb0lCQVFDeCtqSjZkS05HWFp3agpieGVjTUFCM2ZhV2c2K1BUWUtIRG5EMTcxOUplUG1UUE5zU1lsbTUrSFlnZHpJNElGZndWVktsT28xZXpYNGhsCmk5QUNFaDY1RDFzQ002RDJFaGw1a2swc0lxVmlJQVVGSVN2TWdJU2ZDQkpmRlE5NERsM1RIYzdRcUp6ZjVzc3QKWHFzbjlGVDdPRG9IVldmWklQd3BXMjRSNVg0ZTRtVGp5SmJoTm84NUhhZGxZMHoyTVAxRTdvaThNS1BvRWMxdwpXL0tZcHQyNTdIZzJ1TSsrOE9aaG1sYkRzOHptWTlScEtKRVpBMXpscGFZVDdvVE0xK0RjYU5xTTNqelczbGJ5CjdmNjhxQ3lqWU1KWUFrNy8rTUtTcEk0Mk5OVXh5SXo4cWw0K0tGSjJyNDE3U0lsc2paT2wrcmczZG13N2FKbkUKNFY3dmV3dUJBb0dCQVAra3g3TFBrbE02N3NkdXArTzNkZ3hnZlRla2xjOVVuVHE3NXBsQ2J2bll1YkFQeDZwSgo4U2I2V3FBSDFHZFdGazdWVzhDekVkMUlDNC9aSVprbXRjSVdZUm43d2FJcHk5a3J3UFRJeFk4UHFTSTVMVnd0ClYzcGcrL0lOaGdFU0hDSk8vaXo0L0Z5OWY0UmRpa3ZKOHFJeE9IUUx3bUJJWjAza3hTeXBUS1B6QW9HQkFPU1AKZ0VuVWY0aXUvK0VhUm03U1E4bC9IYkZoMHJvemZVRGhhUGNNK3p1UEtINGM1Rk1OWnRnM1JRcXRsWXNaTGtVVwpJTmR4eU9UNjhwRWZaK1c2ZzcvbGRLSDRwMlRHcmhmSTVsRkNPVThEZS8zcjhFdzJFRFR0OGJuUGpFNmNaaVpqCm0wcGwyM0JSYlZvWEUvdWxkRlB6ZmVnQ2tvNVZ4cWlma0crbnM2RjNBb0dCQVBPdUdJVURnMUUremJqZ2E3eU8KZGtJWi80SDRxcXgwMVdMVkZWeGxqTzh2Zk9Dc1NnQ3lkdUpXcGVnQlRxQXAyUjNRRnFPNmpYN0dXKzhFWkJoZQpZOGJjR2pid1dZVEFIb1dtUlVtUHozRXMxbVcrNXRRRWpHd2s0a082VEUvYytXQml0N29hcEVPcWhsQ2Y4V0dJCjRIVm1RWStzWGQzMVpqTkRyQWVFWVgrdEFvR0FYaTRYZ2RTelBLSkh3L3pzdXV1ZmpSNzVJRWViNnFnZTI2WkcKZDA1OUU1eTQ1Y2FIK3dVUnROU0plWTN2aWlLMUl6aXNEYnJRT2pLQjAzVHFmZ291RWR1K0JLUU9iZ05FWjM2YwpFUzNGcVo1WThGZlJhOFgzUmFncXJCTXUwSkczc2VmbmJHK3VUWWp3RTJoaERwZXQ2STN6K3E5Y3JwUC95U24rCi9WTlFQSjhDZ1lBNXh2UnN3eEYwU016M2c3NHdvSEV0N2dwcmRLc1RMaStIV0NFRTlPVm4vUzVEc0hJc3hFZ2IKR1lSQkhvNTFldnY4Vm1xV1BISkU3emhzZ055TnphN1lnNGlUZVl5U04zTnM3VU5nRjN4bmx6ZThLdE5pVU1xZwo4S3dGekVTSHVKQzhJZWpDWlJ4OUhQK0w0cmViaTlqY2NzTHBQUUpucEQrcittQUVzc3o2Vnc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= +``` + +Take that content and store it somewhere. Then you can run the [example NGINX application](https://github.com/cloudify-incubator/cloudify-kubernetes-plugin/blob/master/examples/simple-example-blueprint.yaml). diff --git a/aws-blueprint.yaml b/aws-blueprint.yaml index 8f8cb4d..e803494 100644 --- a/aws-blueprint.yaml +++ b/aws-blueprint.yaml @@ -2,71 +2,45 @@ tosca_definitions_version: cloudify_dsl_1_3 description: > This blueprint creates a Kubernetes Cluster. - It includes a master and two or more nodes with auto-scaling and auto-healing of the nodes. - It is based on the Kubernetes Portable Multi-Node Cluster guide in the Kubernetes documentation website. 
- https://kubernetes.io/docs/getting-started-guides/docker-multinode/ + It is based on this documentation: https://kubernetes.io/docs/getting-started-guides/kubeadm/ imports: - - http://www.getcloudify.org/spec/cloudify/4.0/types.yaml - - http://getcloudify.org.s3.amazonaws.com/spec/aws-plugin/1.4.4/plugin.yaml - - http://www.getcloudify.org/spec/fabric-plugin/1.3.1/plugin.yaml + - http://www.getcloudify.org/spec/cloudify/4.0.1/types.yaml + - http://getcloudify.org.s3.amazonaws.com/spec/aws-plugin/1.4.9/plugin.yaml - http://www.getcloudify.org/spec/diamond-plugin/1.3.5/plugin.yaml - types/scale.yaml - - imports/kubernetes-blueprint.yaml + - types/cloud_config/cloud-config.yaml + - types/kubernetes.yaml + - imports/kubernetes.yaml + - imports/cloud-config.yaml inputs: - key_name: - default: kubernetes-blueprint-key - - private_key_path: - default: ~/.ssh/kubernetes-blueprint-key.pem - - vpc_id: - type: string - - vpc_cidr_block: - type: string - - public_subnet_id: - type: string - - public_subnet_cidr: - type: string - - private_subnet_id: - type: string - - private_subnet_cidr: - type: string - - ec2_region_name: - default: us-east-1 - - ec2_region_endpoint: - default: ec2.us-east-1.amazonaws.com - - availability_zone: - default: us-east-1e - ami: description: > - Amazon Ubuntu 14.04 AMI + An AWS AMI. Tested with a Centos 7.0 image. + default: ami-ae7bfdb8 instance_type: description: > - Agent VM Instance Type + The AWS instance_type. Tested with m3.medium, although that is unnecessarily large. + default: t2.small agent_user: - default: ubuntu + description: > + The username of the agent running on the instance created from the image. + default: centos + + encode_cloud_config: + default: false dsl_definitions: aws_config: &aws_config aws_access_key_id: { get_secret: aws_access_key_id } aws_secret_access_key: { get_secret: aws_secret_access_key } - ec2_region_name: { get_input: ec2_region_name } - ec2_region_endpoint: { get_input: ec2_region_endpoint } + ec2_region_name: { get_secret: ec2_region_name } + ec2_region_endpoint: { get_secret: ec2_region_endpoint } node_templates: @@ -75,22 +49,21 @@ node_templates: properties: agent_config: install_method: remote - port: 22 user: { get_input: agent_user } - key: { get_property: [ key, private_key_path ] } - min_workers: 2 + port: 22 + key: { get_secret: agent_key_private } aws_config: *aws_config image_id: { get_input: ami } instance_type: { get_input: instance_type } - parameters: - user_data: | - #!/bin/bash - sudo groupadd docker - sudo gpasswd -a ubuntu docker - placement: { get_property: [ public_subnet, availability_zone ] } + interfaces: + cloudify.interfaces.lifecycle: + create: + implementation: aws.cloudify_aws.ec2.instance.create + inputs: + args: + placement: { get_secret: availability_zone } + user_data: { get_attribute: [ cloudify_host_cloud_config, cloud_config ] } relationships: - - type: cloudify.aws.relationships.instance_connected_to_keypair - target: key - type: cloudify.aws.relationships.instance_connected_to_subnet target: public_subnet - type: cloudify.aws.relationships.instance_connected_to_security_group @@ -105,22 +78,13 @@ node_templates: properties: agent_config: install_method: remote - port: 22 user: { get_input: agent_user } - key: { get_property: [ key, private_key_path ] } - min_workers: 2 + port: 22 + key: { get_secret: agent_key_private } aws_config: *aws_config image_id: { get_input: ami } instance_type: { get_input: instance_type } - parameters: - user_data: | - #!/bin/bash - sudo groupadd docker - sudo 
gpasswd -a ubuntu docker - placement: { get_property: [ private_subnet, availability_zone ] } relationships: - - type: cloudify.aws.relationships.instance_connected_to_keypair - target: key - type: cloudify.aws.relationships.instance_connected_to_subnet target: private_subnet - type: cloudify.aws.relationships.instance_connected_to_security_group @@ -128,6 +92,13 @@ node_templates: - type: cloudify.aws.relationships.instance_connected_to_security_group target: kubernetes_security_group interfaces: + cloudify.interfaces.lifecycle: + create: + implementation: aws.cloudify_aws.ec2.instance.create + inputs: + args: + placement: { get_secret: availability_zone } + user_data: { get_attribute: [ cloudify_host_cloud_config, cloud_config ] } cloudify.interfaces.monitoring_agent: install: implementation: diamond.diamond_agent.tasks.install @@ -210,24 +181,30 @@ node_templates: type: cloudify.aws.nodes.SecurityGroup properties: aws_config: *aws_config - description: Puppet Group + description: SSH Group rules: - ip_protocol: tcp from_port: 22 to_port: 22 - cidr_ip: { get_input: vpc_cidr_block } + cidr_ip: 0.0.0.0/0 relationships: - type: cloudify.aws.relationships.security_group_contained_in_vpc target: vpc + kubernetes_master_ip: + type: cloudify.aws.nodes.ElasticIP + properties: + aws_config: *aws_config + domain: vpc + public_subnet: type: cloudify.aws.nodes.Subnet properties: aws_config: *aws_config use_external_resource: true - resource_id: { get_input: public_subnet_id } - cidr_block: { get_input: public_subnet_cidr } - availability_zone: { get_input: availability_zone } + resource_id: { get_secret: public_subnet_id } + cidr_block: N/A + availability_zone: N/A relationships: - type: cloudify.aws.relationships.subnet_contained_in_vpc target: vpc @@ -237,9 +214,9 @@ node_templates: properties: aws_config: *aws_config use_external_resource: true - resource_id: { get_input: private_subnet_id } - cidr_block: { get_input: private_subnet_cidr } - availability_zone: { get_input: availability_zone } + resource_id: { get_secret: private_subnet_id } + cidr_block: N/A + availability_zone: N/A relationships: - type: cloudify.aws.relationships.subnet_contained_in_vpc target: vpc @@ -249,21 +226,11 @@ node_templates: properties: aws_config: *aws_config use_external_resource: true - resource_id: { get_input: vpc_id } - cidr_block: { get_input: vpc_cidr_block } - - key: - type: cloudify.aws.nodes.KeyPair - properties: - aws_config: *aws_config - resource_id: { get_input: key_name } - private_key_path: { get_input: private_key_path } - - kubernetes_master_ip: - type: cloudify.aws.nodes.ElasticIP - properties: - aws_config: *aws_config - domain: vpc + resource_id: { get_secret: vpc_id } + cidr_block: N/A + relationships: + - type: cloudify.relationships.depends_on + target: cloudify_host_cloud_config groups: @@ -271,84 +238,6 @@ groups: members: - kubernetes_node_host - - scale_up_group: - members: [kubernetes_node_host] - # This defines a scale group whose members may be scaled up, incrementing by 1. - # The scale worflow is called when the following criteria are met - # The Hyperkube process total CPU will be more than 3 for a total of 10 seconds. - # No more than 6 hosts will be allowed. 
- policies: - auto_scale_up: - type: scale_policy_type - properties: - policy_operates_on_group: true - scale_limit: 6 - scale_direction: '<' - scale_threshold: 30 - #service_selector: .*kubernetes_node_host.*.cpu.total.user - service_selector: .*kubernetes_node_host.*cpu.total.user - cooldown_time: 60 - triggers: - execute_scale_workflow: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: scale - workflow_parameters: - delta: 1 - scalable_entity_name: kubernetes_node - scale_compute: true - - scale_down_group: - members: [kubernetes_node_host] - # This defines a scale group whose members may be scaled up, incrementing by 1. - # The scale worflow is called when the following criteria are met - # The Hyperkube process total CPU will be more than 3 for a total of 10 seconds. - # No more than 6 hosts will be allowed. - policies: - auto_scale_down: - type: scale_policy_type - properties: - policy_operates_on_group: true - scale_limit: 6 - scale_direction: '<' - scale_threshold: 30 - #service_selector: .*kubernetes_node_host.*.cpu.total.user - service_selector: .*kubernetes_node_host.*cpu.total.user - cooldown_time: 60 - triggers: - execute_scale_workflow: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: scale - workflow_parameters: - delta: 1 - scalable_entity_name: kubernetes_node - scale_compute: true - - heal_group: - # This defines a group of hosts in members that may be healed. - # The heal workflow is called when a the following policy criteria are met. - # Either the hyperkube process on the host, or the total host CPU need fall silent. - # The host and all software that it is supposed to have running on it will be healed. - members: [kubernetes_node_host] - policies: - simple_autoheal_policy: - type: cloudify.policies.types.host_failure - properties: - service: - - .*kubernetes_node_host.*.cpu.total.system - - .*kubernetes_node_host.*.process.hyperkube.cpu.percent - interval_between_workflows: 60 - triggers: - auto_heal_trigger: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: heal - workflow_parameters: - node_instance_id: { 'get_property': [ SELF, node_id ] } - diagnose_value: { 'get_property': [ SELF, diagnose ] } - policies: kubernetes_node_vms_scaling_policy: diff --git a/azure-blueprint.yaml b/azure-blueprint.yaml index 573288b..a8e56df 100644 --- a/azure-blueprint.yaml +++ b/azure-blueprint.yaml @@ -1,15 +1,20 @@ tosca_definitions_version: cloudify_dsl_1_3 description: > - This Blueprint installs the simple Kubernetes cluster on an Azure Cloud environment. + This blueprint creates a Kubernetes Cluster. + It is based on this documentation: https://kubernetes.io/docs/getting-started-guides/kubeadm/ + +# Several lines are commented. Currently there is not a Centos 7 image that supports Cloud Init. When there is, we will replace the current docker/kubernetes installation method with the commented lines. 
imports: - - http://www.getcloudify.org/spec/cloudify/4.0/types.yaml - - https://raw.githubusercontent.com/cloudify-cosmo/cloudify-azure-plugin/1.4.2/plugin.yaml - - http://www.getcloudify.org/spec/fabric-plugin/1.3.1/plugin.yaml + - http://www.getcloudify.org/spec/cloudify/4.0.1/types.yaml + - https://raw.githubusercontent.com/cloudify-cosmo/cloudify-azure-plugin/1.4.3/plugin.yaml - http://www.getcloudify.org/spec/diamond-plugin/1.3.5/plugin.yaml - types/scale.yaml -# - imports/kubernetes-blueprint.yaml # We use Azure Extensions to install Docker +# - types/cloud_config/cloud-config.yaml + - types/kubernetes.yaml + - imports/kubernetes.yaml +# - imports/cloud-config.yaml inputs: @@ -17,108 +22,35 @@ inputs: default: k8s resource_suffix: - default: '1' - - # Azure account information - - location: - type: string - required: true - default: eastus + default: '0' retry_after: type: integer default: 60 - # Existing manager resources - mgr_resource_group_name: - type: string - required: true - - mgr_virtual_network_name: - type: string - required: true - - mgr_subnet_name: - type: string - required: true - - # Virtual Machine information - - vm_size: - type: string - required: true - default: Standard_A0 - - vm_os_family: - type: string - required: true - default: linux + size: + default: Standard_A3 - vm_image_publisher: - type: string - required: true - default: Canonical - - vm_image_offer: - type: string - required: true - default: UbuntuServer - - vm_image_sku: - type: string - required: true - default: 14.04.4-LTS - - vm_image_version: - type: string - required: true - default: 14.04.201604060 + image: + default: + publisher: OpenLogic + offer: CentOS + sku: '7.3' + version: latest agent_user: - description: > - Username to create as the VM's administrator user - type: string - required: true - default: cloudify - - vm_os_password: - description: > - Password to use for the VM's administrator user - type: string - required: true - default: Cl0ud1fy! - - agent_user_public_key_data: - default: ssh-rsa AAAAA3----your-key-here----aabbzz - - vm_os_pubkeys: + description: The user name of the agent on the instance created from the image. + default: docker # currently this is required + + ssh_public_keys: description: the public key default: - path: {concat:[ '/home/', { get_input: agent_user }, '/.ssh/authorized_keys' ]} - keyData: { get_input: agent_user_public_key_data } + keyData: { get_secret: agent_key_public } - vm_os_pubkey_auth_only: + encode_cloud_config: default: true - # Application information - - webserver_port: - description: The external web server port - default: 8080 - - private_key_path: - description: > - This is the private key that matches the public key in input agent_user_public_key_data. 
- default: /home/cloudify/.ssh/id_rsa - - agent_config: - default: - user: { get_input: agent_user } - key: { get_input: private_key_path } - install_method: remote - min_workers: 2 - dsl_definitions: azure_config: &azure_config @@ -129,65 +61,10 @@ dsl_definitions: node_templates: - kubernetes_master: - type: cloudify.nodes.SoftwareComponent - interfaces: - cloudify.interfaces.lifecycle: - start: - implementation: fabric.fabric_plugin.tasks.run_commands - inputs: - fabric_env: - host_string: { get_attribute: [ kubernetes_master_host, ip ] } - user: { get_input: agent_user } - key_filename: { get_input: private_key_path } - commands: - - "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" - - "chmod +x kubectl" - - "rm -rf kube-deploy" - - "curl -L https://github.com/kubernetes/kube-deploy/archive/master.tar.gz | tar xz && cd kube-deploy-master/docker-multinode;sudo ./master.sh" - relationships: - - type: cloudify.relationships.depends_on - target: kubernetes_master_docker - - type: cloudify.relationships.contained_in - target: kubernetes_master_host - - kubernetes_node: - type: cloudify.nodes.SoftwareComponent - interfaces: - cloudify.interfaces.lifecycle: - start: - implementation: fabric.fabric_plugin.tasks.run_commands - inputs: - fabric_env: - host_string: { get_attribute: [ kubernetes_node_host, ip ] } - user: { get_input: agent_user } - key_filename: { get_input: private_key_path } - commands: - - "rm -rf kube-deploy" - - { concat: [ "curl -L https://github.com/kubernetes/kube-deploy/archive/master.tar.gz | tar xz && cd kube-deploy-master/docker-multinode;sudo MASTER_IP=", { get_attribute: [ kubernetes_master_host, ip ] }," ./worker.sh" ] } - relationships: - - type: cloudify.relationships.depends_on - target: kubernetes_master - - type: cloudify.relationships.contained_in - target: kubernetes_node_host - - kubectl: - # For convenience, we install the kubectl on your master. 
- type: cloudify.nodes.Root - interfaces: - cloudify.interfaces.lifecycle: - create: - implementation: scripts/kubectl.py - inputs: - kubectl_url: 'http://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl' - relationships: - - type: cloudify.relationships.contained_in - target: kubernetes_master_host - kubernetes_master_docker: type: cloudify.azure.nodes.compute.VirtualMachineExtension properties: - location: { get_input: location } + location: { get_secret: location } retry_after: { get_input: retry_after } azure_config: *azure_config interfaces: @@ -196,11 +73,28 @@ node_templates: inputs: resource_config: publisher: Microsoft.Azure.Extensions - type: DockerExtension - typeHandlerVersion: '1.0' + type: CustomScript + typeHandlerVersion: '2.0' autoUpgradeMinorVersion: true - settings: {} - protectedSettings: {} + settings: + commandToExecute: + concat: + - | + cat < /etc/yum.repos.d/kubernetes.repo + [kubernetes] + name=Kubernetes + baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 + enabled=1 + gpgcheck=1 + repo_gpgcheck=1 + gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg + https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg + EOF + setenforce 0 + - | + yum -t -y install docker-1.12.6-28.git1398f24.el7.centos kubelet-1.6.4-0 kubeadm-1.6.4-0 kubectl-1.6.4-0 kubernetes-cni-0.5.1-0 + systemctl enable docker && systemctl start docker + systemctl enable kubelet && systemctl start kubelet relationships: - type: cloudify.azure.relationships.vmx_contained_in_vm target: kubernetes_master_host @@ -208,7 +102,7 @@ node_templates: kubernetes_node_docker: type: cloudify.azure.nodes.compute.VirtualMachineExtension properties: - location: { get_input: location } + location: { get_secret: location } retry_after: { get_input: retry_after } azure_config: *azure_config interfaces: @@ -217,11 +111,28 @@ node_templates: inputs: resource_config: publisher: Microsoft.Azure.Extensions - type: DockerExtension - typeHandlerVersion: '1.0' + type: CustomScript + typeHandlerVersion: '2.0' autoUpgradeMinorVersion: true - settings: {} - protectedSettings: {} + settings: + commandToExecute: + concat: + - | + cat < /etc/yum.repos.d/kubernetes.repo + [kubernetes] + name=Kubernetes + baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 + enabled=1 + gpgcheck=1 + repo_gpgcheck=1 + gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg + https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg + EOF + setenforce 0 + - | + yum -t -y install docker-1.12.6-28.git1398f24.el7.centos kubelet-1.6.4-0 kubeadm-1.6.4-0 kubectl-1.6.4-0 kubernetes-cni-0.5.1-0 + systemctl enable docker && systemctl start docker + systemctl enable kubelet && systemctl start kubelet relationships: - type: cloudify.azure.relationships.vmx_contained_in_vm target: kubernetes_node_host @@ -229,27 +140,19 @@ node_templates: kubernetes_master_host: type: cloudify.azure.nodes.compute.VirtualMachine properties: - azure_config: *azure_config - location: { get_input: location } + location: { get_secret: location } retry_after: { get_input: retry_after } - os_family: { get_input: vm_os_family } + os_family: linux + azure_config: *azure_config resource_config: - hardwareProfile: - vmSize: { get_input: vm_size } - storageProfile: - imageReference: - publisher: { get_input: vm_image_publisher } - offer: { get_input: vm_image_offer } - sku: { get_input: vm_image_sku } - version: { get_input: vm_image_version } - osProfile: - adminUsername: { get_input: 
agent_user } - adminPassword: { get_input: vm_os_password } - linuxConfiguration: - ssh: - publicKeys: { get_input: vm_os_pubkeys } - disablePasswordAuthentication: { get_input: vm_os_pubkey_auth_only } - agent_config: { get_input: agent_config } + hardwareProfile: {} + storageProfile: {} + osProfile: {} + agent_config: + user: { get_input: agent_user } + install_method: remote + port: 22 + key: { get_secret: agent_key_private } relationships: - type: cloudify.azure.relationships.contained_in_resource_group target: resource_group @@ -259,31 +162,41 @@ node_templates: target: availability_set - type: cloudify.azure.relationships.connected_to_nic target: kubernetes_master_host_nic + interfaces: + cloudify.interfaces.lifecycle: + create: + implementation: pkg.cloudify_azure.resources.compute.virtualmachine.create + inputs: + args: + hardwareProfile: + vmSize: { get_input: size } + storageProfile: + imageReference: { get_input: image} + osProfile: + adminUsername: { get_input: agent_user } + adminPassword: '' + # customData: { get_attribute: [ cloudify_host_cloud_config, cloud_config ] } + linuxConfiguration: + ssh: + publicKeys: { get_input: ssh_public_keys } + disablePasswordAuthentication: true kubernetes_node_host: type: cloudify.azure.nodes.compute.VirtualMachine properties: - azure_config: *azure_config - location: { get_input: location } + location: { get_secret: location } retry_after: { get_input: retry_after } - os_family: { get_input: vm_os_family } + os_family: linux + azure_config: *azure_config resource_config: - hardwareProfile: - vmSize: { get_input: vm_size } - storageProfile: - imageReference: - publisher: { get_input: vm_image_publisher } - offer: { get_input: vm_image_offer } - sku: { get_input: vm_image_sku } - version: { get_input: vm_image_version } - osProfile: - adminUsername: { get_input: agent_user } - adminPassword: { get_input: vm_os_password } - linuxConfiguration: - ssh: - publicKeys: { get_input: vm_os_pubkeys } - disablePasswordAuthentication: { get_input: vm_os_pubkey_auth_only } - agent_config: { get_input: agent_config } + hardwareProfile: {} + storageProfile: {} + osProfile: {} + agent_config: + user: { get_input: agent_user } + install_method: remote + port: 22 + key: { get_secret: agent_key_private } relationships: - type: cloudify.azure.relationships.contained_in_resource_group target: resource_group @@ -294,6 +207,23 @@ node_templates: - type: cloudify.azure.relationships.connected_to_nic target: kubernetes_node_host_nic interfaces: + cloudify.interfaces.lifecycle: + create: + implementation: pkg.cloudify_azure.resources.compute.virtualmachine.create + inputs: + args: + hardwareProfile: + vmSize: { get_input: size } + storageProfile: + imageReference: { get_input: image} + osProfile: + adminUsername: { get_input: agent_user } + adminPassword: '' +# customData: { get_attribute: [ cloudify_host_cloud_config, cloud_config ] } + linuxConfiguration: + ssh: + publicKeys: { get_input: ssh_public_keys } + disablePasswordAuthentication: true cloudify.interfaces.monitoring_agent: install: implementation: diamond.diamond_agent.tasks.install @@ -318,54 +248,11 @@ node_templates: hyperkube: name: hyperkube - resource_group: - type: cloudify.azure.nodes.ResourceGroup - properties: - name: {concat:[{get_input: resource_prefix},arg,{get_input: resource_suffix}]} - location: { get_input: location } - azure_config: *azure_config - - storage_account: - type: cloudify.azure.nodes.storage.StorageAccount - properties: - location: { get_input: location } - azure_config: 
*azure_config - retry_after: { get_input: retry_after } - resource_config: - accountType: Standard_LRS - relationships: - - type: cloudify.azure.relationships.contained_in_resource_group - target: resource_group - - virtual_network: - type: cloudify.azure.nodes.network.VirtualNetwork - properties: - resource_group_name: { get_input: mgr_resource_group_name } - name: { get_input: mgr_virtual_network_name } - azure_config: *azure_config - use_external_resource: true - location: { get_input: location } - relationships: - - type: cloudify.azure.relationships.contained_in_resource_group - target: resource_group - - subnet: - type: cloudify.azure.nodes.network.Subnet - properties: - resource_group_name: { get_input: mgr_resource_group_name } - name: { get_input: mgr_subnet_name } - azure_config: *azure_config - use_external_resource: true - location: { get_input: location } - relationships: - - type: cloudify.azure.relationships.contained_in_virtual_network - target: virtual_network - network_security_group: type: cloudify.azure.nodes.network.NetworkSecurityGroup properties: name: {concat:[{get_input: resource_prefix},nsg,{get_input: resource_suffix}]} - location: { get_input: location } + location: { get_secret: location } azure_config: *azure_config retry_after: { get_input: retry_after } resource_config: @@ -510,7 +397,7 @@ node_templates: type: cloudify.azure.nodes.compute.AvailabilitySet properties: name: {concat:[{get_input: resource_prefix},availset,{get_input: resource_suffix}]} - location: { get_input: location } + location: { get_secret: location } azure_config: *azure_config retry_after: { get_input: retry_after } relationships: @@ -520,7 +407,7 @@ node_templates: kubernetes_node_host_nic: type: cloudify.azure.nodes.network.NetworkInterfaceCard properties: - location: { get_input: location } + location: { get_secret: location } azure_config: *azure_config retry_after: { get_input: retry_after } relationships: @@ -534,7 +421,7 @@ node_templates: kubernetes_master_host_nic: type: cloudify.azure.nodes.network.NetworkInterfaceCard properties: - location: { get_input: location } + location: { get_secret: location } azure_config: *azure_config retry_after: { get_input: retry_after } relationships: @@ -548,7 +435,7 @@ node_templates: kubernetes_node_host_nic_ip_cfg: type: cloudify.azure.nodes.network.IPConfiguration properties: - location: { get_input: location } + location: { get_secret: location } azure_config: *azure_config retry_after: { get_input: retry_after } resource_config: @@ -562,7 +449,7 @@ node_templates: kubernetes_master_host_nic_ip_cfg: type: cloudify.azure.nodes.network.IPConfiguration properties: - location: { get_input: location } + location: { get_secret: location } azure_config: *azure_config retry_after: { get_input: retry_after } resource_config: @@ -576,7 +463,7 @@ node_templates: kubernetes_master_ip: type: cloudify.azure.nodes.network.PublicIPAddress properties: - location: { get_input: location } + location: { get_secret: location } azure_config: *azure_config retry_after: { get_input: retry_after } resource_config: @@ -585,11 +472,51 @@ node_templates: - type: cloudify.azure.relationships.contained_in_resource_group target: resource_group -########################################################### -# This outputs section exposes the application endpoint. 
-# You can access it by running: -# - cfy deployments -d outputs -########################################################### + subnet: + type: cloudify.azure.nodes.network.Subnet + properties: + resource_group_name: { get_secret: mgr_resource_group_name } + name: { get_secret: mgr_subnet_name } + azure_config: *azure_config + use_external_resource: true + location: { get_secret: location } + relationships: + - type: cloudify.azure.relationships.contained_in_virtual_network + target: virtual_network + + virtual_network: + type: cloudify.azure.nodes.network.VirtualNetwork + properties: + resource_group_name: { get_secret: mgr_resource_group_name } + name: { get_secret: mgr_virtual_network_name } + azure_config: *azure_config + use_external_resource: true + location: { get_secret: location } + relationships: + - type: cloudify.azure.relationships.contained_in_resource_group + target: resource_group + + storage_account: + type: cloudify.azure.nodes.storage.StorageAccount + properties: + location: { get_secret: location } + azure_config: *azure_config + retry_after: { get_input: retry_after } + resource_config: + accountType: Standard_LRS + relationships: + - type: cloudify.azure.relationships.contained_in_resource_group + target: resource_group + + resource_group: + type: cloudify.azure.nodes.ResourceGroup + properties: + name: {concat:[{get_input: resource_prefix},arg,{get_input: resource_suffix}]} + location: { get_secret: location } + azure_config: *azure_config +# relationships: +# - type: cloudify.relationships.depends_on +# target: cloudify_host_cloud_config groups: @@ -599,83 +526,6 @@ groups: - kubernetes_node_host_nic - kubernetes_node_host - scale_up_group: - members: [kubernetes_node_host] - # This defines a scale group whose members may be scaled up, incrementing by 1. - # The scale worflow is called when the following criteria are met - # The Hyperkube process total CPU will be more than 3 for a total of 10 seconds. - # No more than 6 hosts will be allowed. - policies: - auto_scale_up: - type: scale_policy_type - properties: - policy_operates_on_group: true - scale_limit: 6 - scale_direction: '<' - scale_threshold: 30 - #service_selector: .*kubernetes_node_host.*.cpu.total.user - service_selector: .*kubernetes_node_host.*cpu.total.user - cooldown_time: 60 - triggers: - execute_scale_workflow: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: scale - workflow_parameters: - delta: 1 - scalable_entity_name: kubernetes_node - scale_compute: true - - scale_down_group: - members: [kubernetes_node_host] - # This defines a scale group whose members may be scaled up, incrementing by 1. - # The scale worflow is called when the following criteria are met - # The Hyperkube process total CPU will be more than 3 for a total of 10 seconds. - # No more than 6 hosts will be allowed. - policies: - auto_scale_down: - type: scale_policy_type - properties: - policy_operates_on_group: true - scale_limit: 6 - scale_direction: '<' - scale_threshold: 30 - #service_selector: .*kubernetes_node_host.*.cpu.total.user - service_selector: .*kubernetes_node_host.*cpu.total.user - cooldown_time: 60 - triggers: - execute_scale_workflow: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: scale - workflow_parameters: - delta: 1 - scalable_entity_name: kubernetes_node - scale_compute: true - - heal_group: - # This defines a group of hosts in members that may be healed. - # The heal workflow is called when a the following policy criteria are met. 
- # Either the hyperkube process on the host, or the total host CPU need fall silent. - # The host and all software that it is supposed to have running on it will be healed. - members: [kubernetes_node_host] - policies: - simple_autoheal_policy: - type: cloudify.policies.types.host_failure - properties: - service: - - .*kubernetes_node_host.*.cpu.total.system - - .*kubernetes_node_host.*.process.hyperkube.cpu.percent - interval_between_workflows: 60 - triggers: - auto_heal_trigger: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: heal - workflow_parameters: - node_instance_id: { 'get_property': [ SELF, node_id ] } - diagnose_value: { 'get_property': [ SELF, diagnose ] } - policies: kubernetes_node_vms_scaling_policy: diff --git a/blueprint.png b/blueprint.png deleted file mode 100644 index 05fc5e1..0000000 Binary files a/blueprint.png and /dev/null differ diff --git a/bmc-blueprint.yaml b/bmc-blueprint.yaml deleted file mode 100644 index 39b6a6e..0000000 --- a/bmc-blueprint.yaml +++ /dev/null @@ -1,298 +0,0 @@ -tosca_definitions_version: cloudify_dsl_1_3 - -imports: - - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml - - http://www.getcloudify.org/spec/fabric-plugin/1.3.1/plugin.yaml - - http://www.getcloudify.org/spec/diamond-plugin/1.3.3/plugin.yaml - - https://raw.githubusercontent.com/cloudify-incubator/cloudify-oraclebmc-plugin/master/plugin.yaml - - types/scale.yaml - -dsl_definitions: - bmc_config: &bmc_config - user: - fingerprint: - key_file: - tenancy: - region: - - hyperkube_monitoring: &hyperkube_monitoring - collectors_config: - CPUCollector: {} - MemoryCollector: {} - LoadAverageCollector: {} - DiskUsageCollector: - config: - devices: sd[a-z]+[0-9]*$ - NetworkCollector: {} - ProcessResourcesCollector: - config: - enabled: true - unit: B - measure_collector_time: true - interval: 1 - process: - hyperkube: - name: hyperkube - -inputs: - ssh_user: - default: opc - ssh_keyfile: - default: '' - master_key: - default: '' - worker_key: - default: '' - master_image: - description: image (must be Oracle Linux) - master_shape: - description: flavor - worker_image: - description: image - worker_shape: - description: flavor - availability_domain: - description: availability domain - -node_types: - fabric_host: - derived_from: cloudify.oraclebmc.nodes.Instance - properties: - ssh_keyfile: - type: string - default: { get_input: ssh_keyfile } - -node_templates: - - master: - type: cloudify.nodes.SoftwareComponent - interfaces: - cloudify.interfaces.lifecycle: - start: - implementation: fabric.fabric_plugin.tasks.run_task - inputs: - fabric_env: - host_string: { get_attribute: [ master_host, public_ip ] } - user: { get_input: ssh_user } - key_filename: { get_input: ssh_keyfile } - tasks_file: scripts/fabric_tasks.py - task_name: start_master_bmc - task_properties: - k8s_settings: - k8s_version: v1.3.0 - etcd_version: 2.2.5 - flannel_version: v0.6.2 - flannel_ipmasq: 'true' - flannel_network: 10.1.0.0/16 - flannel_backend: udp - restart_policy: unless-stopped - arch: amd64 - net_interface: eth0 - relationships: - - type: cloudify.relationships.contained_in - target: master_host - - worker: - type: cloudify.nodes.SoftwareComponent - interfaces: - cloudify.interfaces.lifecycle: - start: - implementation: fabric.fabric_plugin.tasks.run_task - inputs: - fabric_env: - host_string: { get_attribute: [ worker_host, public_ip ] } - user: { get_input: ssh_user } - key_filename: { get_input: ssh_keyfile } - tasks_file: scripts/fabric_tasks.py - task_name: 
start_worker_bmc - task_properties: - master_ip: { get_attribute: [ master_host, ip ] } - k8s_settings: - k8s_version: v1.3.0 - etcd_version: 2.2.5 - flannel_version: v0.6.2 - flannel_ipmasq: 'true' - flannel_network: 10.1.0.0/16 - flannel_backend: udp - restart_policy: unless-stopped - arch: amd64 - net_interface: eth0 - relationships: - - type: cloudify.relationships.depends_on - target: master - - type: cloudify.relationships.contained_in - target: worker_host - - master_host: - type: fabric_host - properties: - agent_config: - install_method: remote - bmc_config: *bmc_config - ssh_keyfile: { get_input: master_key} - name: master - public_key_file: - image_id: { get_input: master_image } - instance_shape: { get_input: master_shape } - compartment_id: - availability_domain: { get_input: availability_domain } - relationships: - - type: cloudify.oraclebmc.relationships.instance_connected_to_subnet - target: subnet - - worker_host: - type: fabric_host - properties: - agent_config: - install_method: remote - bmc_config: *bmc_config - ssh_keyfile: { get_input: worker_key} - name: worker - public_key_file: - image_id: { get_input: worker_image } - instance_shape: { get_input: worker_shape } - compartment_id: - availability_domain: { get_input: availability_domain } - relationships: - - type: cloudify.oraclebmc.relationships.instance_connected_to_subnet - target: subnet - interfaces: - cloudify.interfaces.monitoring_agent: - install: - implementation: diamond.diamond_agent.tasks.install - inputs: - diamond_config: - interval: 1 - start: diamond.diamond_agent.tasks.start - stop: diamond.diamond_agent.tasks.stop - uninstall: diamond.diamond_agent.tasks.uninstall - cloudify.interfaces.monitoring: - start: - implementation: diamond.diamond_agent.tasks.add_collectors - inputs: - <<: *hyperkube_monitoring - - network: - type: cloudify.oraclebmc.nodes.VCN - properties: - bmc_config: *bmc_config - use_external_resource: true - resource_id: - - subnet: - type: cloudify.oraclebmc.nodes.Subnet - properties: - bmc_config: *bmc_config - name: kubernetes_subnet - compartment_id: - cidr_block: 10.10.20.0/24 - availability_domain: - security_rules: - - "0.0.0.0/0,22" - - "0.0.0.0/0,53" - - "0.0.0.0/0,53,udp" - - "0.0.0.0/0,443" - - "0.0.0.0/0,8080" - - "10.10.20.0/24,2379" - - "10.10.20.0/24,4001" - - "10.10.20.0/24,6443" - - "10.10.20.0/24,8000" - - "10.10.20.0/24,9090" - - "10.10.20.0/24,10250" - relationships: - - type: cloudify.oraclebmc.relationships.subnet_in_network - target: network - - gateway: - type: cloudify.oraclebmc.nodes.Gateway - properties: - resource_id: - use_external_resource: true - bmc_config: *bmc_config - relationships: - - type: cloudify.oraclebmc.relationships.gateway_connected_to_network - target: network - -groups: - - scale_up_group: - members: [worker_host] - # This defines a scale group whose members may be scaled up, incrementing by 1. - # The scale worflow is called when the following criteria are met - # The Hyperkube process total CPU will be more than 3 for a total of 10 seconds. - # No more than 6 hosts will be allowed. 
- policies: - auto_scale_up: - type: scale_policy_type - properties: - policy_operates_on_group: true - scale_limit: 6 - scale_direction: '<' - scale_threshold: 30 - service_selector: .*worker_host.*cpu.total.user - cooldown_time: 60 - triggers: - execute_scale_workflow: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: scale - workflow_parameters: - delta: 1 - scalable_entity_name: worker - scale_compute: true - - scale_down_group: - # This defines a scale group whose members may be scaled down. Only one host will be removed per run. - # The scale worflow is called when the following criteria are met - # The Hyperkube process total CPU will be less than 1 for a total of 200 seconds. - # No less than 2 hosts will be allowed. - members: [worker_host] - policies: - auto_scale_down: - type: scale_policy_type - properties: - scale_limit: 2 - scale_direction: '>' - scale_threshold: 25 - #service_selector: .*worker_host.*.process.hyperkube.cpu.percent - service_selector: .*worker_host.*cpu.total.user - cooldown_time: 60 - moving_window_size: 30 - triggers: - execute_scale_workflow: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: scale - workflow_parameters: - delta: -1 - scalable_entity_name: worker - scale_compute: true - - heal_group: - # This defines a group of hosts in members that may be healed. - # The heal workflow is called when a the following policy criteria are met. - # Either the hyperkube process on the host, or the total host CPU need fall silent. - # The host and all software that it is supposed to have running on it will be healed. - members: [worker_host] - policies: - simple_autoheal_policy: - type: cloudify.policies.types.host_failure - properties: - service: - - .*worker_host.*.cpu.total.system - - .*worker_host.*.process.hyperkube.cpu.percent - interval_between_workflows: 60 - triggers: - auto_heal_trigger: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: heal - workflow_parameters: - node_instance_id: { 'get_property': [ SELF, node_id ] } - diagnose_value: { 'get_property': [ SELF, diagnose ] } - -outputs: - kubernetes_info: - description: Kubernetes Dashboard URL - value: - url: {concat: ["http://",{ get_attribute: [ master_host, public_ip ]},":8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard" ] } diff --git a/imports/cloud-config.yaml b/imports/cloud-config.yaml new file mode 100644 index 0000000..cd2723a --- /dev/null +++ b/imports/cloud-config.yaml @@ -0,0 +1,47 @@ +node_templates: + + cloudify_host_cloud_config: + type: cloudify.nodes.CloudConfig + properties: + resource_config: + encode_base64: { get_input: encode_cloud_config } + interfaces: + cloudify.interfaces.lifecycle: + create: + inputs: + cloud_config: + groups: + - docker + users: + - name: { get_input: agent_user } + primary-group: wheel + groups: docker + shell: /bin/bash + sudo: ['ALL=(ALL) NOPASSWD:ALL'] + ssh-authorized-keys: + - { get_secret: agent_key_public } + write_files: + - path: /etc/yum.repos.d/kubernetes.repo + owner: root:root + permissions: '0444' + content: | + # installed by cloud-init + [kubernetes] + name=Kubernetes + baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 + enabled=1 + gpgcheck=1 + repo_gpgcheck=1 + gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg + https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg + packages: + - [docker, 1.12.6-28.git1398f24.el7.centos] + - [kubelet, 1.6.4-0] + - [kubeadm, 1.6.4-0] + - [kubectl, 1.6.4-0] 
+ - [kubernetes-cni, 0.5.1-0] + runcmd: + - [ systemctl, enable, docker ] + - [ systemctl, start, docker ] + - [ systemctl, enable, kubelet ] + - [ systemctl, start, kubelet ] diff --git a/imports/kubernetes-blueprint.yaml b/imports/kubernetes-blueprint.yaml deleted file mode 100644 index c6e19d1..0000000 --- a/imports/kubernetes-blueprint.yaml +++ /dev/null @@ -1,79 +0,0 @@ -tosca_definitions_version: cloudify_dsl_1_3 - -node_templates: - - kubernetes_master: - type: cloudify.nodes.SoftwareComponent - interfaces: - cloudify.interfaces.lifecycle: - create: - implementation: scripts/docker_install.py # Install Docker if not already installed. - start: - implementation: fabric.fabric_plugin.tasks.run_task - inputs: - fabric_env: - host_string: { get_attribute: [ kubernetes_master_host, ip ] } - user: { get_input: agent_user } - key_filename: { get_input: private_key_path } - tasks_file: scripts/fabric_tasks.py - task_name: start_master - task_properties: - k8s_settings: - k8s_version: v1.3.0 - etcd_version: 2.2.5 - flannel_version: v0.6.2 - flannel_ipmasq: 'true' - flannel_network: 10.1.0.0/16 - flannel_backend: udp - restart_policy: unless-stopped - arch: amd64 - net_interface: eth0 - relationships: - - type: cloudify.relationships.contained_in - target: kubernetes_master_host - - kubernetes_node: - type: cloudify.nodes.SoftwareComponent - interfaces: - cloudify.interfaces.lifecycle: - create: - implementation: scripts/docker_install.py # Install Docker if not already installed. - start: - implementation: fabric.fabric_plugin.tasks.run_task - inputs: - fabric_env: - host_string: { get_attribute: [ kubernetes_node_host, ip ] } - user: { get_input: agent_user } - key_filename: { get_input: private_key_path } - tasks_file: scripts/fabric_tasks.py - task_name: start_worker - task_properties: - master_ip: { get_attribute: [ kubernetes_master_host, ip ] } - k8s_settings: - k8s_version: v1.3.0 - etcd_version: 2.2.5 - flannel_version: v0.6.2 - flannel_ipmasq: 'true' - flannel_network: 10.1.0.0/16 - flannel_backend: udp - restart_policy: unless-stopped - arch: amd64 - net_interface: eth0 - relationships: - - type: cloudify.relationships.depends_on - target: kubernetes_master - - type: cloudify.relationships.contained_in - target: kubernetes_node_host - - kubectl: - # For convenience, we install the kubectl on your master. 
-    type: cloudify.nodes.Root
-    interfaces:
-      cloudify.interfaces.lifecycle:
-        create:
-          implementation: scripts/kubectl.py
-          inputs:
-            kubectl_url: 'http://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl'
-    relationships:
-      - type: cloudify.relationships.contained_in
-        target: kubernetes_master_host
diff --git a/imports/kubernetes.yaml b/imports/kubernetes.yaml
new file mode 100644
index 0000000..4f49283
--- /dev/null
+++ b/imports/kubernetes.yaml
@@ -0,0 +1,81 @@
+node_templates:
+
+  kubernetes_master:
+    type: cloudify.nodes.Kubernetes.Master
+    relationships:
+      - type: cloudify.relationships.contained_in
+        target: kubernetes_master_host
+
+  kubernetes_node:
+    type: cloudify.nodes.Kubernetes.Node
+    relationships:
+      - type: cloudify.relationships.contained_in
+        target: kubernetes_node_host
+      - type: cloudify.relationships.depends_on
+        target: kubernetes_master
+
+groups:
+
+  scale_up_group:
+    members: [kubernetes_node_host]
+    policies:
+      auto_scale_up:
+        type: scale_policy_type
+        properties:
+          policy_operates_on_group: true
+          scale_limit: 6
+          scale_direction: '<'
+          scale_threshold: 30
+          service_selector: .*kubernetes_node_host.*cpu.total.user
+          cooldown_time: 60
+        triggers:
+          execute_scale_workflow:
+            type: cloudify.policies.triggers.execute_workflow
+            parameters:
+              workflow: scale
+              workflow_parameters:
+                delta: 1
+                scalable_entity_name: kubernetes_node
+                scale_compute: true
+
+  scale_down_group:
+    members: [kubernetes_node_host]
+    policies:
+      auto_scale_down:
+        type: scale_policy_type
+        properties:
+          policy_operates_on_group: true
+          scale_limit: 2
+          scale_direction: '>'
+          scale_threshold: 25
+          #service_selector: .*kubernetes_node_host.*.cpu.total.user
+          service_selector: .*kubernetes_node_host.*cpu.total.user
+          cooldown_time: 60
+        triggers:
+          execute_scale_workflow:
+            type: cloudify.policies.triggers.execute_workflow
+            parameters:
+              workflow: scale
+              workflow_parameters:
+                delta: -1
+                scalable_entity_name: kubernetes_node
+                scale_compute: true
+
+  heal_group:
+    members: [kubernetes_node_host]
+    policies:
+      simple_autoheal_policy:
+        type: cloudify.policies.types.host_failure
+        properties:
+          service:
+            - .*kubernetes_node_host.*.cpu.total.system
+            - .*kubernetes_node_host.*.process.hyperkube.cpu.percent
+          interval_between_workflows: 60
+        triggers:
+          auto_heal_trigger:
+            type: cloudify.policies.triggers.execute_workflow
+            parameters:
+              workflow: heal
+              workflow_parameters:
+                node_instance_id: { 'get_property': [ SELF, node_id ] }
+                diagnose_value: { 'get_property': [ SELF, diagnose ] }
diff --git a/inputs/aws.yaml.example b/inputs/aws.yaml.example
deleted file mode 100644
index ef8dc44..0000000
--- a/inputs/aws.yaml.example
+++ /dev/null
@@ -1,11 +0,0 @@
-vpc_id: vpc-829588e6
-vpc_cidr_block: 172.16.0.0/16
-public_subnet_id: subnet-d6ed089f
-public_subnet_cidr: 172.16.122.0/24
-private_subnet_id: subnet-e9ed08a0
-private_subnet_cidr: 172.16.123.0/24
-ec2_region_name: eu-west-1
-ec2_region_endpoint: ec2.eu-west-1.amazonaws.com
-availability_zone: eu-west-1a
-ami: ami-b9b394ca
-instance_type: m3.medium
diff --git a/inputs/azure.yaml.example b/inputs/azure.yaml.example
deleted file mode 100644
index a9edb9f..0000000
--- a/inputs/azure.yaml.example
+++ /dev/null
@@ -1,9 +0,0 @@
-# ###################################
-# Azure
-# Example Inputs file for azure-blueprint.yaml
-
-# These values are those of your manager.
-mgr_resource_group_name: '' # The ID of the resource group that your manager VM is deployed in.
-mgr_virtual_network_name: '' # The ID of the virtual network that your manager VM private IP NIC is on. -mgr_subnet_name: '' # The ID of the subnet that your manager VM private IP NIC is on. -agent_user_public_key_data: "" # The public key material diff --git a/inputs/openstack.yaml.example b/inputs/openstack.yaml.example deleted file mode 100644 index 99a59a5..0000000 --- a/inputs/openstack.yaml.example +++ /dev/null @@ -1,9 +0,0 @@ -image: 3edda9cf-11fd-4e4a-8a51-f58b9ad593c2 -flavor: 8f4b7ae1-b8c2-431f-bb0c-362a5ece0381 -agent_user: ubuntu -region: sal01 -router_name: openstack-example-network-router -public_network_name: openstack-example-network-name -public_subnet_name: openstack-example-network-subnet -private_network_name: example-openstack-private-network-name -private_subnet_name: example-openstack-private-network-subnet diff --git a/openstack-blueprint.yaml b/openstack-blueprint.yaml index ddb299e..a93376f 100644 --- a/openstack-blueprint.yaml +++ b/openstack-blueprint.yaml @@ -1,90 +1,44 @@ -########################################################### -# This Blueprint installs Kubernetes on Openstack -########################################################### - tosca_definitions_version: cloudify_dsl_1_3 description: > This blueprint creates a Kubernetes Cluster. - It includes a master and two or more nodes with auto-scaling and auto-healing of the nodes. - It is based on the Kubernetes Portable Multi-Node Cluster guide in the Kubernetes documentation website. - https://kubernetes.io/docs/getting-started-guides/docker-multinode/ + It is based on this documentation: https://kubernetes.io/docs/getting-started-guides/kubeadm/ imports: - - http://www.getcloudify.org/spec/cloudify/4.0/types.yaml + - http://www.getcloudify.org/spec/cloudify/4.0.1/types.yaml - http://www.getcloudify.org/spec/openstack-plugin/2.0.1/plugin.yaml - http://www.getcloudify.org/spec/fabric-plugin/1.3.1/plugin.yaml - http://www.getcloudify.org/spec/diamond-plugin/1.3.5/plugin.yaml - types/scale.yaml - - imports/kubernetes-blueprint.yaml + - types/cloud_config/cloud-config.yaml + - types/kubernetes.yaml + - imports/kubernetes.yaml + - imports/cloud-config.yaml inputs: image: - description: Image to be used when launching agent VM's + description: Image to be used when launching agent VMs flavor: - description: Flavor of the agent VM's + description: Flavor of the agent VMs agent_user: description: > - User for connecting to agent VM's - default: ubuntu - - key_name: - default: kubernetes-blueprint-key - - private_key_path: - default: ~/.ssh/kubernetes-blueprint-key.pem - - external_network_name: - default: external - - router_name: - description: The Router Name - - public_network_name: - description: The name of the Openstack public network. + User for connecting to agent VMs + default: centos - public_subnet_name: - description: The name of the public network subnet. - - private_network_name: - description: The name of the Openstack private network. - - private_subnet_name: - description: The name of the private network subnet. 
- - region: - default: '' + encode_cloud_config: + default: false dsl_definitions: - hyperkube_monitoring: &hyperkube_monitoring - collectors_config: - CPUCollector: {} - MemoryCollector: {} - LoadAverageCollector: {} - DiskUsageCollector: - config: - devices: x?vd[a-z]+[0-9]*$ - NetworkCollector: {} - ProcessResourcesCollector: - config: - enabled: true - unit: B - measure_collector_time: true - cpu_interval: 0.5 - process: - hyperkube: - name: hyperkube - openstack_config: &openstack_config username: { get_secret: keystone_username } password: { get_secret: keystone_password } tenant_name: { get_secret: keystone_tenant_name } auth_url: { get_secret: keystone_url } - region: { get_input: region } + region: { get_secret: region } node_templates: @@ -93,44 +47,84 @@ node_templates: properties: openstack_config: *openstack_config agent_config: - install_method: remote - user: { get_input: agent_user } - min_workers: 2 - key: { get_property: [ key, private_key_path ] } + user: { get_input: agent_user } + install_method: remote + port: 22 + key: { get_secret: agent_key_private } server: - image: { get_input: image } - flavor: { get_input: flavor } - userdata: | - #!/bin/bash - sudo groupadd docker - sudo gpasswd -a ubuntu docker + key_name: '' + image: '' + flavor: '' management_network_name: { get_property: [ public_network, resource_id ] } + interfaces: + cloudify.interfaces.lifecycle: + create: + inputs: + args: + image: { get_input: image } + flavor: { get_input: flavor } + userdata: { get_attribute: [ cloudify_host_cloud_config, cloud_config ] } relationships: - - target: key - type: cloudify.openstack.server_connected_to_keypair - target: kubernetes_master_port type: cloudify.openstack.server_connected_to_port - kubernetes_master_port: - type: cloudify.openstack.nodes.Port + kubernetes_node_host: + type: cloudify.openstack.nodes.Server properties: openstack_config: *openstack_config + agent_config: + user: { get_input: agent_user } + install_method: remote + port: 22 + key: { get_secret: agent_key_private } + server: + key_name: '' + image: '' + flavor: '' + management_network_name: { get_property: [ private_network, resource_id ] } relationships: - type: cloudify.relationships.contained_in - target: public_network - - type: cloudify.relationships.depends_on - target: public_subnet - - type: cloudify.openstack.port_connected_to_security_group - target: kubernetes_security_group - - type: cloudify.openstack.port_connected_to_floating_ip - target: kubernetes_master_ip - - kubernetes_master_ip: - type: cloudify.openstack.nodes.FloatingIP - properties: - openstack_config: *openstack_config - floatingip: - floating_network_name: { get_property: [ external_network, resource_id ] } + target: k8s_node_scaling_tier + - target: kubernetes_node_port + type: cloudify.openstack.server_connected_to_port + interfaces: + cloudify.interfaces.lifecycle: + create: + inputs: + args: + image: { get_input: image } + flavor: { get_input: flavor } + userdata: { get_attribute: [ cloudify_host_cloud_config, cloud_config ] } + cloudify.interfaces.monitoring_agent: + install: + implementation: diamond.diamond_agent.tasks.install + inputs: + diamond_config: + interval: 1 + start: diamond.diamond_agent.tasks.start + stop: diamond.diamond_agent.tasks.stop + uninstall: diamond.diamond_agent.tasks.uninstall + cloudify.interfaces.monitoring: + start: + implementation: diamond.diamond_agent.tasks.add_collectors + inputs: + collectors_config: + CPUCollector: {} + MemoryCollector: {} + LoadAverageCollector: {} + 
DiskUsageCollector: + config: + devices: x?vd[a-z]+[0-9]*$ + NetworkCollector: {} + ProcessResourcesCollector: + config: + enabled: true + unit: B + measure_collector_time: true + cpu_interval: 0.5 + process: + hyperkube: + name: hyperkube kubernetes_security_group: type: cloudify.openstack.nodes.SecurityGroup @@ -166,46 +160,19 @@ node_templates: - remote_ip_prefix: 0.0.0.0/0 port: 10250 - kubernetes_node_host: - # A virtual machine that will get a Kubernetes node installed on it. - type: cloudify.openstack.nodes.Server + kubernetes_master_port: + type: cloudify.openstack.nodes.Port properties: openstack_config: *openstack_config - agent_config: - install_method: remote - user: { get_input: agent_user } - min_workers: 2 - key: { get_property: [ key, private_key_path ] } - server: - image: {get_input: image} - flavor: {get_input: flavor} - userdata: | - #!/bin/bash - sudo groupadd docker - sudo gpasswd -a ubuntu docker - management_network_name: { get_property: [ private_network, resource_id ] } relationships: - type: cloudify.relationships.contained_in - target: k8s_node_scaling_tier - - target: kubernetes_node_port - type: cloudify.openstack.server_connected_to_port - - target: key - type: cloudify.openstack.server_connected_to_keypair - interfaces: - cloudify.interfaces.monitoring_agent: - install: - implementation: diamond.diamond_agent.tasks.install - inputs: - diamond_config: - interval: 1 - start: diamond.diamond_agent.tasks.start - stop: diamond.diamond_agent.tasks.stop - uninstall: diamond.diamond_agent.tasks.uninstall - cloudify.interfaces.monitoring: - start: - implementation: diamond.diamond_agent.tasks.add_collectors - inputs: - <<: *hyperkube_monitoring + target: public_network + - type: cloudify.relationships.depends_on + target: public_subnet + - type: cloudify.openstack.port_connected_to_security_group + target: kubernetes_security_group + - type: cloudify.openstack.port_connected_to_floating_ip + target: kubernetes_master_ip kubernetes_node_port: type: cloudify.openstack.nodes.Port @@ -226,7 +193,7 @@ node_templates: properties: openstack_config: *openstack_config use_external_resource: true - resource_id: { get_input: private_subnet_name } + resource_id: { get_secret: private_subnet_name } relationships: - target: private_network type: cloudify.relationships.contained_in @@ -236,14 +203,14 @@ node_templates: properties: openstack_config: *openstack_config use_external_resource: true - resource_id: { get_input: private_network_name } + resource_id: { get_secret: private_network_name } public_subnet: type: cloudify.openstack.nodes.Subnet properties: openstack_config: *openstack_config use_external_resource: true - resource_id: { get_input: public_subnet_name } + resource_id: { get_secret: public_subnet_name } relationships: - target: public_network type: cloudify.relationships.contained_in @@ -255,14 +222,14 @@ node_templates: properties: openstack_config: *openstack_config use_external_resource: true - resource_id: { get_input: public_network_name } + resource_id: { get_secret: public_network_name } router: type: cloudify.openstack.nodes.Router properties: openstack_config: *openstack_config use_external_resource: true - resource_id: { get_input: router_name } + resource_id: { get_secret: router_name } relationships: - target: external_network type: cloudify.relationships.connected_to @@ -272,18 +239,21 @@ node_templates: properties: openstack_config: *openstack_config use_external_resource: true - resource_id: { get_input: external_network_name } - - key: - type: 
cloudify.openstack.nodes.KeyPair - properties: - openstack_config: *openstack_config - resource_id: { get_input: key_name } - private_key_path: { get_input: private_key_path } + resource_id: { get_secret: external_network_name } + relationships: + - type: cloudify.relationships.depends_on + target: cloudify_host_cloud_config k8s_node_scaling_tier: type: cloudify.nodes.Root + kubernetes_master_ip: + type: cloudify.openstack.nodes.FloatingIP + properties: + openstack_config: *openstack_config + floatingip: + floating_network_name: { get_property: [ external_network, resource_id ] } + groups: k8s_node_scale_group: @@ -291,83 +261,6 @@ groups: - kubernetes_node_host - kubernetes_node_port - scale_up_group: - members: [kubernetes_node_host] - # This defines a scale group whose members may be scaled up, incrementing by 1. - # The scale worflow is called when the following criteria are met - # The Hyperkube process total CPU will be more than 3 for a total of 10 seconds. - # No more than 6 hosts will be allowed. - policies: - auto_scale_up: - type: scale_policy_type - properties: - policy_operates_on_group: true - scale_limit: 6 - scale_direction: '<' - scale_threshold: 30 - #service_selector: .*kubernetes_node_host.*.cpu.total.user - service_selector: .*kubernetes_node_host.*cpu.total.user - cooldown_time: 60 - triggers: - execute_scale_workflow: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: scale - workflow_parameters: - delta: 1 - scalable_entity_name: kubernetes_node - scale_compute: true - - scale_down_group: - members: [kubernetes_node_host] - # This defines a scale group whose members may be scaled up, incrementing by 1. - # The scale worflow is called when the following criteria are met - # The Hyperkube process total CPU will be more than 3 for a total of 10 seconds. - # No more than 6 hosts will be allowed. - policies: - auto_scale_down: - type: scale_policy_type - properties: - policy_operates_on_group: true - scale_limit: 6 - scale_direction: '<' - scale_threshold: 30 - #service_selector: .*kubernetes_node_host.*.cpu.total.user - service_selector: .*kubernetes_node_host.*cpu.total.user - cooldown_time: 60 - triggers: - execute_scale_workflow: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: scale - workflow_parameters: - delta: 1 - scalable_entity_name: kubernetes_node - scale_compute: true - - heal_group: - # This defines a group of hosts in members that may be healed. - # The heal workflow is called when a the following policy criteria are met. - # Either the hyperkube process on the host, or the total host CPU need fall silent. - # The host and all software that it is supposed to have running on it will be healed. 
- members: [kubernetes_node_host] - policies: - simple_autoheal_policy: - type: cloudify.policies.types.host_failure - properties: - service: - - .*kubernetes_node_host.*.cpu.total.system - - .*kubernetes_node_host.*.process.hyperkube.cpu.percent - interval_between_workflows: 60 - triggers: - auto_heal_trigger: - type: cloudify.policies.triggers.execute_workflow - parameters: - workflow: heal - workflow_parameters: - node_instance_id: { 'get_property': [ SELF, node_id ] } - diagnose_value: { 'get_property': [ SELF, diagnose ] } - policies: kubernetes_node_vms_scaling_policy: @@ -377,6 +270,7 @@ policies: targets: [k8s_node_scale_group] outputs: + kubernetes_info: description: Kubernetes Dashboard URL value: diff --git a/plugins/cloudify-kubernetes-plugin b/plugins/cloudify-kubernetes-plugin deleted file mode 160000 index aaffdb8..0000000 --- a/plugins/cloudify-kubernetes-plugin +++ /dev/null @@ -1 +0,0 @@ -Subproject commit aaffdb8638dd578b80d0568465ea012641173e94 diff --git a/plugins/cloudify-proxy-plugin/.gitignore b/plugins/cloudify-proxy-plugin/.gitignore deleted file mode 100644 index 3a7778b..0000000 --- a/plugins/cloudify-proxy-plugin/.gitignore +++ /dev/null @@ -1,68 +0,0 @@ -conf/nohup.out - -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] - -# C extensions -*.so - -# Distribution / packaging -.Python -env/ -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -*.egg-info/ -.installed.cfg -*.egg - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. -*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -cover -.tox/ -.coverage -.cache -nosetests.xml -coverage.xml - -# testing sqlite db's -*.db - -# Translations -*.mo -*.pot - -# Django stuff: -*.log - -# Sphinx documentation -docs/_build/ - -# PyBuilder -target/ - -.idea/* -.venv/* - -AUTHORS -ChangeLog -.cloudify -local-storage/ diff --git a/plugins/cloudify-proxy-plugin/LICENSE b/plugins/cloudify-proxy-plugin/LICENSE deleted file mode 100644 index f433b1a..0000000 --- a/plugins/cloudify-proxy-plugin/LICENSE +++ /dev/null @@ -1,177 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. 
- - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS diff --git a/plugins/cloudify-proxy-plugin/README.rst b/plugins/cloudify-proxy-plugin/README.rst deleted file mode 100644 index 531c69d..0000000 --- a/plugins/cloudify-proxy-plugin/README.rst +++ /dev/null @@ -1,156 +0,0 @@ -======== -Overview -======== - -The deployment proxy plugin connects two deployments in order to allow deployment coordination. -The source blueprint that wishes to depend on another blueprint, -for example a web tier that wants to depend on a database, includes the cloudify.nodes. -DeploymentProxy node in the blueprint and creates a depends-on or other relationship with it. -The DeploymentProxy node waits until deployment will be in terminated state. - -=============== -Node properties -=============== - -The DeploymentProxy node itself has the following properties that govern it's behavior:: - - - deployment_id : the deployment to depend on - - inherit_outputs : a list of outputs that are should be inherited from depployment proxy outputs. - Default: empty list. - - inherit_inputs : Flag that indicated if it is necessary to inherit deployment inputs - - timeout : number of seconds to wait. When timeout expires, a "RecoverableError" is thrown. - Default=30. - -The BlueprintDeployment node has the following properties:: - - - blueprint_id : blueprint ID to create deployment from - - inputs : inputs for the deployment - - ignore_live_nodes_on_delete : ignore live nodes during deletion for a deployment - -How it works? 
Let's take a look at multi-part Nodecellar blueprint nodes:: - - mongodb_host_deployment: - type: cloudify.nodes.BlueprintDeployment - properties: - blueprint_id: { get_input: mongodb_host_blueprint_id } - inputs: - vcloud_username: { get_input: vcloud_username } - vcloud_password: { get_input: vcloud_password } - vcloud_token: { get_input: vcloud_token } - vcloud_url: { get_input: vcloud_url } - vcloud_service: { get_input: vcloud_service } - vcloud_service_type: { get_input: vcloud_service_type } - vcloud_instance: { get_input: vcloud_instance } - vcloud_api_version: { get_input: vcloud_api_version } - mongo_ssh: { get_input: mongo_ssh } - vcloud_org_url: { get_input: vcloud_org_url } - vcloud_org: { get_input: vcloud_org } - vcloud_vdc: { get_input: vcloud_vdc } - catalog: { get_input: catalog} - template: { get_input: template } - server_cpu: { get_input: server_cpu } - server_memory: { get_input: server_memory } - network_use_existing: { get_input: network_use_existing } - common_network_name: { get_input: common_network_name } - mongo_ip_address: { get_input: mongo_ip_address } - common_network_public_nat_use_existing: { get_input: common_network_public_nat_use_existing } - edge_gateway: { get_input: edge_gateway } - server_user: { get_input: server_user } - user_public_key: { get_input: user_public_key } - user_private_key: { get_input: user_private_key } - -This node has specific implementation of the lifecycle:: - - On create: Creates a deployment with given inputs - On start: Installs a deployment - On stop: Uninstalls a deployment - On delete: Deletes a deployment - -Given node has runtime property:: - - deployment_id - -it represents a deployment id of newly create deployment instance inside Cloudify. - -Next node consumes that deployment id as an input for next blueprint deployment:: - - mongodb_application_deployment: - type: cloudify.nodes.BlueprintDeployment - properties: - blueprint_id: { get_input: mongodb_application_blueprint_id } - cloudify.interfaces.lifecycle: - create: - inputs: - deployment_inputs: - mongodb_host_deployment_id: { get_attribute: [ mongodb_host_deployment, deployment_id ]} - relationships: - - target: mongodb_host_deployment - type: cloudify.relationships.depends_on - -In given case it was decided to split VM and networking provisioning into one blueprint with defined outputs. -Next blueprint describes software installation within Fabric plugin. - -============= -Usage example -============= - -First of all please take a look at samples folder to see blueprints examples. -In most cases it is necessary to get deployment outputs in runtime during installing another deployment. -In case of Nodecellar example, as user i want to attach MongoDB to NodeJS application, MongoDB is available within other deployment. -As user i'd like to chain deployments within proxy pattern - define a deployment proxy node template and consume its attributes within blueprint. 
-Here's how proxy object looks like:: - - mongodb_proxy_deployment: - type: cloudify.nodes.DeploymentProxy - properties: - deployment_id: { get_input: mongodb_deployment_id } - inherit_inputs: True - inherit_outputs: - - 'mongodb_internal_ip' - - 'mongodb_public_ip' - - -Within NodeJS example blueprint composers are able to access proxy deployment attributes -within TOSCA functions in the next manner:: - - MONGO_HOST: { get_attribute: [ mongodb_proxy_deployment, mongodb_internal_ip ] } - -If it is necessary to access proxy deployment outputs it is possible to do in the next manner:: - - network_name: { get_attribute: [ mongodb_proxy_deployment, proxy_deployment_inputs, common_network_name ] } - - - -NOTE!! get_property function of TOSCA doesn't work with node properties. - -========== -Disclaimer -========== - -Tested on:: - - Cloudify 3.2.1 - - -Available blueprints:: - - vCloud Air Nodecellar multi-blueprint application - -Operating system:: - - Given code OS-agnostic - -========================================== -How to run multi-part Nodecellar blueprint -========================================== - -In order to test multi-part blueprint deployment you have to execute next operations:: - - upload blueprint vcloud-mongodb-host-nodecellar-multipart-blueprint.yaml - upload blueprint vcloud-mongodb-application-nodecellar-multipart-blueprint.yaml - upload blueprint vcloud-nodejs-host-nodecellar-multipart-blueprint.yaml - upload blueprint vcloud-nodejs-application-nodecellar-multipart-blueprint.yaml - upload blueprint vcloud-nodecellar-multipart-blueprint.yaml - create a deployment for blueprint vcloud-nodecellar-multipart-blueprint.yaml - run installation for deployment of the blueprint vcloud-nodecellar-multipart-blueprint.yaml - diff --git a/plugins/cloudify-proxy-plugin/blueprints/tasks.py b/plugins/cloudify-proxy-plugin/blueprints/tasks.py deleted file mode 100644 index a2b4ad8..0000000 --- a/plugins/cloudify-proxy-plugin/blueprints/tasks.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) 2015 GigaSpaces Technologies Ltd. All rights reserved -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. - -import uuid - -import proxy_common - -from cloudify import ctx -from cloudify import exceptions -from cloudify import manager - -from cloudify.decorators import operation - - -@operation -def create_validation(**kwargs): - ctx.logger.info("Entering create_validation event.") - client = manager.get_rest_client() - blueprint_id = ctx.node.properties['blueprint_id'] - use_existing_deployment = ctx.node.properties[ - 'use_existing_deployment'] - if not use_existing_deployment: - if not blueprint_id or blueprint_id == '': - ctx.logger.error("Malformed blueprint ID.") - raise exceptions.NonRecoverableError( - "Blueprint ID is not specified.") - try: - client.blueprints.get(blueprint_id) - ctx.logger.info("Success, blueprint exists.") - except Exception as ex: - ctx.logger.error("Error during obtaining blueprint {0}. " - "Reason: {1}." 
- .format(blueprint_id, str(ex))) - raise exceptions.NonRecoverableError( - "Error during obtaining blueprint {0}. " - "Reason: {1}.".format(blueprint_id, str(ex))) - - ctx.logger.info("Exiting create_validation event.") - - -@operation -def create_deployment(deployment_inputs=None, **kwargs): - ctx.logger.info("Entering create_deployment event.") - client = manager.get_rest_client() - blueprint_id = ctx.node.properties['blueprint_id'] - ctx.logger.info("Blueprint ID: %s" % blueprint_id) - deployment_id = "{0}-{1}".format(blueprint_id, - str(uuid.uuid4())) - use_existing_deployment = ctx.node.properties['use_existing_deployment'] - existing_deployment_id = ctx.node.properties['existing_deployment_id'] - try: - if not use_existing_deployment: - ctx.logger.info("deployment ID to create: %s" % deployment_id) - deployment = client.deployments.create( - blueprint_id, - deployment_id, - inputs=deployment_inputs) - ctx.logger.info("Deployment object {0}." - .format(str(deployment))) - else: - client.deployments.get(existing_deployment_id) - deployment_id = existing_deployment_id - ctx.logger.info("Instance runtime properties %s" - % str(ctx.instance.runtime_properties)) - proxy_common.poll_until_with_timeout( - proxy_common.check_if_deployment_is_ready( - client, deployment_id), - expected_result=True, - timeout=900) - ctx.instance.runtime_properties.update( - {'deployment_id': deployment_id}) - except Exception as ex: - ctx.logger.error(str(ex)) - raise exceptions.NonRecoverableError(str(ex)) - - ctx.logger.info("Exiting create_validation event.") - - -@operation -def delete_deployment(**kwargs): - ctx.logger.info("Entering delete_deployment event.") - - if 'deployment_id' not in ctx.instance.runtime_properties: - raise exceptions.NonRecoverableError( - "Deployment ID as runtime property not specified.") - - client = manager.get_rest_client() - deployment_id = ctx.instance.runtime_properties[ - 'deployment_id'] - ignore = ctx.node.properties['ignore_live_nodes_on_delete'] - try: - proxy_common.poll_until_with_timeout( - proxy_common.check_if_deployment_is_ready( - client, deployment_id), - expected_result=True, - timeout=900) - client.deployments.delete(deployment_id, - ignore_live_nodes=ignore) - except Exception as ex: - ctx.logger.error("Error during deployment deletion {0}. " - "Reason: {1}." - .format(deployment_id, str(ex))) - raise exceptions.NonRecoverableError( - "Error during deployment uninstall {0}. " - "Reason: {1}.".format(deployment_id, str(ex))) - ctx.logger.info("Exiting delete_deployment event.") diff --git a/plugins/cloudify-proxy-plugin/deployments/__init__.py b/plugins/cloudify-proxy-plugin/deployments/__init__.py deleted file mode 100644 index e69de29..0000000 diff --git a/plugins/cloudify-proxy-plugin/deployments/tasks.py b/plugins/cloudify-proxy-plugin/deployments/tasks.py deleted file mode 100644 index dd29755..0000000 --- a/plugins/cloudify-proxy-plugin/deployments/tasks.py +++ /dev/null @@ -1,169 +0,0 @@ -# Copyright (c) 2015 GigaSpaces Technologies Ltd. All rights reserved -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# * See the License for the specific language governing permissions and -# * limitations under the License. - -import sys - -import proxy_common - -from cloudify import ctx -from cloudify import exceptions -from cloudify import manager - -from cloudify.decorators import operation - - -@operation -def create_validation(**kwargs): - ctx.logger.info("Entering create_validation event.") - client = manager.get_rest_client() - deployment_id = ctx.node.properties['deployment_id'] - if not deployment_id or deployment_id == '': - ctx.logger.error("Malformed deployment ID.") - raise exceptions.NonRecoverableError( - "Deployment ID is not specified.") - try: - client.deployments.get(deployment_id) - ctx.logger.info("Success, deployment exists.") - except Exception as ex: - ctx.logger.error("Error during obtaining deployment {0}. " - "Reason: {1}." - .format(deployment_id, str(ex))) - raise exceptions.NonRecoverableError( - "Error during obtaining deployment {0}. " - "Reason: {1}.".format(deployment_id, str(ex))) - ctx.logger.info("Exiting create_validation event.") - - -@operation -def wait_for_deployment(deployment_id, **kwargs): - ctx.logger.info("Entering wait_for_deployment event.") - ctx.logger.info("Using deployment %s" % deployment_id) - if not deployment_id: - raise exceptions.NonRecoverableError( - "Deployment ID not specified.") - - client = manager.get_rest_client() - timeout = ctx.node.properties['timeout'] - proxy_common.poll_until_with_timeout( - proxy_common.check_if_deployment_is_ready( - client, deployment_id), - expected_result=True, - timeout=timeout) - - ctx.logger.info("Exiting wait_for_deployment event.") - - -@operation -def inherit_deployment_attributes(deployment_id, **kwargs): - ctx.logger.info("Entering obtain_outputs event.") - client = manager.get_rest_client() - outputs = ctx.node.properties['inherit_outputs'] - ctx.logger.info("Outputs to inherit: {0}." - .format(str(outputs))) - ctx.logger.info('deployment id %s' % deployment_id) - inherit_inputs = ctx.node.properties['inherit_inputs'] - ctx.instance.runtime_properties.update({ - 'inherit_outputs': outputs, - 'deployment_id': deployment_id - }) - try: - if inherit_inputs: - _inputs = client.deployments.get(deployment_id)['inputs'] - ctx.instance.runtime_properties.update( - {'proxy_deployment_inputs': _inputs}) - deployment_outputs = client.deployments.outputs.get( - deployment_id)['outputs'] - ctx.logger.info("Available deployment outputs {0}." - .format(str(deployment_outputs))) - ctx.logger.info("Available runtime properties: {0}.".format( - str(ctx.instance.runtime_properties.keys()) - )) - for key in outputs: - ctx.instance.runtime_properties.update( - {key: deployment_outputs.get(key)} - ) - except Exception as ex: - ctx.logger.error( - "Caught exception during obtaining " - "deployment outputs {0} {1}" - .format(sys.exc_info()[0], str(ex))) - raise exceptions.NonRecoverableError( - "Caught exception during obtaining " - "deployment outputs {0} {1}. 
Available runtime properties {2}" - .format(sys.exc_info()[0], str(ex), - str(ctx.instance.runtime_properties.keys()))) - ctx.logger.info("Exiting obtain_outputs event.") - - -@operation -def cleanup(**kwargs): - ctx.logger.info("Entering cleanup_outputs event.") - outputs = ctx.instance.runtime_properties.get('inherit_outputs', []) - if ('proxy_deployment_inputs' in - ctx.instance.runtime_properties): - del ctx.instance.runtime_properties['proxy_deployment_inputs'] - for key in outputs: - if key in ctx.instance.runtime_properties: - del ctx.instance.runtime_properties[key] - ctx.logger.info("Exiting cleanup_outputs event.") - - -@operation -def install_deployment(**kwargs): - ctx.logger.info("Entering install_deployment event.") - if 'deployment_id' not in ctx.instance.runtime_properties: - raise exceptions.NonRecoverableError( - "Deployment ID as runtime property not specified.") - - client = manager.get_rest_client() - deployment_id = ctx.instance.runtime_properties[ - 'deployment_id'] - proxy_common.poll_until_with_timeout( - proxy_common.check_if_deployment_is_ready( - client, deployment_id), - expected_result=True, - timeout=900) - - if not ctx.node.properties['use_existing_deployment']: - proxy_common.execute_workflow(deployment_id, - 'install') - - ctx.instance.runtime_properties[ - 'outputs'] = (client.deployments.get( - deployment_id).outputs) - ctx.logger.info("Exiting install_deployment event.") - - -@operation -def uninstall_deployment(**kwargs): - ctx.logger.info("Entering uninstall_deployment event.") - if 'deployment_id' not in ctx.instance.runtime_properties: - raise exceptions.NonRecoverableError( - "Deployment ID as runtime property not specified.") - - deployment_id = ctx.instance.runtime_properties[ - 'deployment_id'] - if not ctx.node.properties['use_existing_deployment']: - proxy_common.execute_workflow(deployment_id, - 'uninstall') - - ctx.logger.info("Exiting uninstall_deployment event.") - -@operation -def get_outputs(**kwargs): -# if (ctx.target.node._node.type!='cloudify.nodes.DeploymentProxy'): -# raise (NonRecoverableError('invalid target: must connect to DeploymentProxy type')) - - for output in ctx.target.node.properties['inherit_outputs']: - ctx.source.instance.runtime_properties[output]=ctx.target.instance.runtime_properties[output] diff --git a/plugins/cloudify-proxy-plugin/dev-requirements.txt b/plugins/cloudify-proxy-plugin/dev-requirements.txt deleted file mode 100644 index 696c40f..0000000 --- a/plugins/cloudify-proxy-plugin/dev-requirements.txt +++ /dev/null @@ -1,3 +0,0 @@ --e git+https://github.com/cloudify-cosmo/cloudify-dsl-parser@master#egg=cloudify-dsl-parser==3.3a6 --e git+https://github.com/cloudify-cosmo/cloudify-rest-client@master#egg=cloudify-rest-client==3.3a6 --e git+https://github.com/cloudify-cosmo/cloudify-plugins-common@master#egg=cloudify-plugins-common==3.3a6 diff --git a/plugins/cloudify-proxy-plugin/plugin.yaml b/plugins/cloudify-proxy-plugin/plugin.yaml deleted file mode 100644 index a955138..0000000 --- a/plugins/cloudify-proxy-plugin/plugin.yaml +++ /dev/null @@ -1,71 +0,0 @@ -plugins: - proxy: - executor: central_deployment_agent - source: cloudify-proxy-plugin - -node_types: - - cloudify.nodes.DeploymentProxy: - derived_from: cloudify.nodes.Root - properties: - inherit_outputs: - default: [] - description: A list of proxy deployment outputs to inherit. - timeout: - default: 30 - description: The time to wait for deployment executions to finish. 
- inherit_inputs: - default: False - description: Flag that indicated if it is necessary to inherit deployment inputs. - interfaces: - cloudify.interfaces.lifecycle: - create: - implementation: proxy.deployments.tasks.wait_for_deployment - start: - implementation: proxy.deployments.tasks.inherit_deployment_attributes - stop: - implementation: proxy.deployments.tasks.cleanup - cloudify.interfaces.validation: - creation: - implementation: proxy.deployments.tasks.create_validation - - cloudify.nodes.BlueprintDeployment: - derived_from: cloudify.nodes.Root - properties: - blueprint_id: - default: '' - description: blueprint ID to work with - inputs: - default: {} - description: blueprint deployment inputs - ignore_live_nodes_on_delete: - default: False - description: Ignore live nodes while deleting a deployment - use_existing_deployment: - default: False - description: Use external deployment ID - existing_deployment_id: - default: '' - description: Existing deployment ID - interfaces: - cloudify.interfaces.lifecycle: - create: - implementation: proxy.blueprints.tasks.create_deployment - start: - implementation: proxy.deployments.tasks.install_deployment - stop: - implementation: proxy.deployments.tasks.uninstall_deployment - delete: - implementation: proxy.blueprints.tasks.delete_deployment - cloudify.interfaces.validation: - creation: - implementation: proxy.blueprints.tasks.create_validation - -relationships: - cloudify.relationships.connected_to_proxy: - derived_from: cloudify.relationships.connected_to - source_interfaces: - cloudify.interfaces.relationship_lifecycle: - postconfigure: - implementation: proxy.deployments.tasks.get_outputs - diff --git a/plugins/cloudify-proxy-plugin/proxy_common/__init__.py b/plugins/cloudify-proxy-plugin/proxy_common/__init__.py deleted file mode 100644 index cd285da..0000000 --- a/plugins/cloudify-proxy-plugin/proxy_common/__init__.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) 2015 GigaSpaces Technologies Ltd. All rights reserved -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. 
- -import time - -from cloudify import ctx -from cloudify import exceptions -from cloudify import manager - - -def poll_until_with_timeout(pollster, expected_result=None, - sleep_time=5, timeout=30): - ctx.logger.info("Entering poll_until_with_timeout") - ctx.logger.info(pollster) - if not callable(pollster): - raise exceptions.NonRecoverableError( - "%s is not callable" % pollster.__name__) - while time.time() <= time.time() + timeout: - if pollster() != expected_result: - time.sleep(sleep_time) - else: - return True - raise exceptions.NonRecoverableError("Timed out waiting for deployment " - "to reach appropriate state.") - - -def get_latest_workflow(items, columns): - from operator import itemgetter - comps = [((itemgetter(col[1:].strip()), -1) - if col.startswith('-') else - (itemgetter(col.strip()), 1)) - for col in columns] - - def comparator(left, right): - for fn, m in comps: - result = cmp(fn(left), fn(right)) - if result: - return m * result - else: - return 0 - executions = sorted(items, cmp=comparator) - return executions[0]['workflow_id'], executions[0]['status'] - - -def is_installed(client, deployment_id): - _execs = client.executions.list( - deployment_id=deployment_id) - ctx.logger.info("Deployment executions statuses: {0}.".format( - str([[_e['workflow_id'], - _e['status']] for _e in _execs]) - )) - for e in _execs: - e['created_at'] = time.strptime(e['created_at'][:-7], - '%Y-%m-%d %H:%M:%S') - - latest_workflow, status = get_latest_workflow( - _execs, ['created_at']) - return 'install' in latest_workflow and 'terminated' in status - - -def check_if_deployment_is_ready(client, deployment_id): - - def _poll(): - return True -# _execs = client.executions.list( -# deployment_id=deployment_id) -# ctx.logger.info("Deployment executions statuses: {0}.".format( -# str([[_e['workflow_id'], -# _e['status']] for _e in _execs]) -# )) -# ctx.logger.info("Are all executions were finished? {0}".format( -# [str(_e['status']) == "terminated" for _e in _execs])) -# ctx.logger.info(any([str(_e['status']) == -# "terminated" for _e in _execs])) -# return all([str(_e['status']) == "terminated" for _e in _execs]) - - return _poll - - -def execute_workflow(deployment_id, workflow_id): - ctx.logger.info("Entering execute_workflow event.") - try: - client = manager.get_rest_client() - client.executions.start(deployment_id, - workflow_id) - ctx.logger.info("Workflow {0} started.".format( - workflow_id)) - poll_until_with_timeout( - check_if_deployment_is_ready( - client, deployment_id), - expected_result=True, - timeout=900) - except Exception as ex: - ctx.logger.error("Error during deployment uninstall {0}. " - "Reason: {1}." - .format(deployment_id, str(ex))) - raise exceptions.NonRecoverableError( - "Error during deployment uninstall {0}. " - "Reason: {1}.".format(deployment_id, str(ex))) - ctx.logger.info("Exiting execute_workflow event.") diff --git a/plugins/cloudify-proxy-plugin/setup.py b/plugins/cloudify-proxy-plugin/setup.py deleted file mode 100644 index c7fa55f..0000000 --- a/plugins/cloudify-proxy-plugin/setup.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) 2015 GigaSpaces Technologies Ltd. All rights reserved -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. - -import os -import setuptools - - -def read(fname): - return open(os.path.join(os.path.dirname(__file__), fname)).read() - -with open('test-requirements.txt') as f: - test_required = f.read().splitlines() - - -setuptools.setup( - - # Do not use underscores in the plugin name. - name='cloudify-proxy-plugin', - version='0.1', - author='Gigaspaces.com', - author_email='Gigaspaces.com', - description='plugin that defines dependencies between deployments', - - # This must correspond to the actual packages in the plugin. - packages=[ - 'deployments', - 'blueprints', - 'proxy_common', - ], - - license='LICENSE', - install_requires=[ - 'cloudify-plugins-common==3.3.1', - 'requests==2.8.0' - ], - test_requires=test_required, -) diff --git a/plugins/cloudify-proxy-plugin/test-requirements.txt b/plugins/cloudify-proxy-plugin/test-requirements.txt deleted file mode 100644 index ee52f21..0000000 --- a/plugins/cloudify-proxy-plugin/test-requirements.txt +++ /dev/null @@ -1,6 +0,0 @@ -hacking>=0.10.0,<0.11 -mock>=1.0 -nose>=1.3 -coverage -testtools>=0.9.36,!=1.2.0 -tox diff --git a/plugins/cloudify-proxy-plugin/tox.ini b/plugins/cloudify-proxy-plugin/tox.ini deleted file mode 100644 index 6f02df1..0000000 --- a/plugins/cloudify-proxy-plugin/tox.ini +++ /dev/null @@ -1,24 +0,0 @@ -[tox] -envlist = py27, pep8 -minversion = 1.6 -skipsdist = True - -[testenv] -passenv = -setenv = VIRTUAL_ENV={envdir} -usedevelop = True -install_command = pip install -U {opts} {packages} -deps = -r{toxinidir}/dev-requirements.txt - -r{toxinidir}/test-requirements.txt -whitelist_externals = bash - -[testenv:pep8] -commands = - flake8 - - -[flake8] -show-source = True -ignore = H103 -exclude=.venv,.tox,dist,*egg,etc,build, -filename=*.py diff --git a/resources/pod.yaml b/resources/pod.yaml deleted file mode 100644 index 74ec41f..0000000 --- a/resources/pod.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: nginx -spec: - replicas: 3 - selector: - app: nginx - template: - metadata: - name: nginx - labels: - app: nginx - spec: - containers: - - name: nginx - image: nginx - ports: - - containerPort: 80 - diff --git a/resources/service.yaml b/resources/service.yaml deleted file mode 100644 index e65d2f8..0000000 --- a/resources/service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - name: nginx - name: nginx -spec: - type: NodePort - ports: - - port: 8000 - targetPort: 80 - nodePort: 30000 - protocol: TCP - selector: - app: nginx - diff --git a/scripts/cloud_config/create.py b/scripts/cloud_config/create.py new file mode 100644 index 0000000..c9051f7 --- /dev/null +++ b/scripts/cloud_config/create.py @@ -0,0 +1,28 @@ +#!/usr/bin/env python + +try: + import yaml +except ImportError: + import pip + pip.main(['install', 'pyyaml']) + import yaml + +import base64 +from cloudify import ctx +from cloudify.state import ctx_parameters as inputs + + +if __name__ == '__main__': + + cloud_config = inputs['cloud_config'] + ctx.logger.debug('cloud_config: {0}'.format(cloud_config)) + cloud_config_yaml = yaml.dump(cloud_config) + 
cloud_config_string = str(cloud_config_yaml).replace('!!python/unicode ', '') + cloud_config_string = '#cloud-config\n' + cloud_config_string + ctx.logger.debug('cloud_config_string: {0}'.format(cloud_config_string)) + + if ctx.node.properties['resource_config'].get('encode_base64'): + cloud_config_string = base64.encodestring(cloud_config_string) + ctx.logger.debug('cloud_config_string: {0}'.format(cloud_config_string)) + + ctx.instance.runtime_properties['cloud_config'] = cloud_config_string diff --git a/scripts/configure_master.py b/scripts/configure_master.py new file mode 100644 index 0000000..95278d1 --- /dev/null +++ b/scripts/configure_master.py @@ -0,0 +1,58 @@ +#!/usr/bin/env python + +import pwd +import grp +import os +import getpass +import subprocess +from cloudify import ctx + + +def execute_command(_command): + + ctx.logger.debug('_command {0}.'.format(_command)) + + subprocess_args = { + 'args': _command.split(), + 'stdout': subprocess.PIPE, + 'stderr': subprocess.PIPE + } + + ctx.logger.debug('subprocess_args {0}.'.format(subprocess_args)) + + process = subprocess.Popen(**subprocess_args) + output, error = process.communicate() + + ctx.logger.debug('command: {0} '.format(_command)) + ctx.logger.debug('output: {0} '.format(output)) + ctx.logger.debug('error: {0} '.format(error)) + ctx.logger.debug('process.returncode: {0} '.format(process.returncode)) + + if process.returncode: + ctx.logger.error('Running `{0}` returns error.'.format(_command)) + return False + + return output + + +if __name__ == '__main__': + + # Start the Kube Master + start_output = execute_command('sudo kubeadm init --skip-preflight-checks') + for line in start_output.split('\n'): + if 'kubeadm join' in line: + ctx.instance.runtime_properties['join_command'] = line.lstrip() + + # Add the kubeadmin config to environment + agent_user = getpass.getuser() + uid = pwd.getpwnam(agent_user).pw_uid + gid = grp.getgrnam('docker').gr_gid + admin_file_dest = os.path.join(os.path.expanduser('~'), 'admin.conf') + + execute_command('sudo cp {0} {1}'.format('/etc/kubernetes/admin.conf', admin_file_dest)) + execute_command('sudo chown {0}:{1} {2}'.format(uid, gid, admin_file_dest)) + + with open(os.path.join(os.path.expanduser('~'), '.bashrc'), 'a') as outfile: + outfile.write('export KUBECONFIG=$HOME/admin.conf') + os.environ['KUBECONFIG'] = admin_file_dest + execute_command('kubectl apply -f https://git.io/weave-kube-1.6') diff --git a/scripts/configure_node.py b/scripts/configure_node.py new file mode 100644 index 0000000..9463763 --- /dev/null +++ b/scripts/configure_node.py @@ -0,0 +1,43 @@ +#!/usr/bin/env python + +import subprocess +from cloudify import ctx + +START_COMMAND = 'sudo kubeadm join --token {0} {1}:{2}' + + +def execute_command(_command): + + ctx.logger.debug('_command {0}.'.format(_command)) + + subprocess_args = { + 'args': _command.split(), + 'stdout': subprocess.PIPE, + 'stderr': subprocess.PIPE + } + + ctx.logger.debug('subprocess_args {0}.'.format(subprocess_args)) + + process = subprocess.Popen(**subprocess_args) + output, error = process.communicate() + + ctx.logger.debug('command: {0} '.format(_command)) + ctx.logger.debug('output: {0} '.format(output)) + ctx.logger.debug('error: {0} '.format(error)) + ctx.logger.debug('process.returncode: {0} '.format(process.returncode)) + + if process.returncode: + ctx.logger.error('Running `{0}` returns error.'.format(_command)) + return False + + return output + + +if __name__ == '__main__': + + masters = \ + [x for x in ctx.instance.relationships if 
'cloudify.nodes.Kubernetes.Master' in x.target.node.type_hierarchy] + ctx_master = masters[0] + join_command = ctx_master.target.instance.runtime_properties['join_command'] + join_command = 'sudo {0} --skip-preflight-checks'.format(join_command) + execute_command(join_command) diff --git a/scripts/create.py b/scripts/create.py new file mode 100644 index 0000000..24524e7 --- /dev/null +++ b/scripts/create.py @@ -0,0 +1,36 @@ +#!/usr/bin/env python + +import subprocess +from cloudify import ctx +from cloudify.exceptions import RecoverableError + + +def check_for_docker(): + + command = 'docker ps' + + try: + process = subprocess.Popen( + command.split() + ) + except OSError: + return False + + output, error = process.communicate() + + ctx.logger.debug('command: {0} '.format(command)) + ctx.logger.debug('output: {0} '.format(output)) + ctx.logger.debug('error: {0} '.format(error)) + ctx.logger.debug('process.returncode: {0} '.format(process.returncode)) + + if process.returncode: + ctx.logger.error('Running `{0}` returns error.'.format(command)) + return False + + return True + + +if __name__ == '__main__': + + if not check_for_docker(): + raise RecoverableError('Waiting for docker to be installed.') diff --git a/scripts/docker_install.py b/scripts/docker_install.py deleted file mode 100644 index b9cebea..0000000 --- a/scripts/docker_install.py +++ /dev/null @@ -1,81 +0,0 @@ -#!/usr/bin/env python - -import os -import subprocess -from cloudify import ctx -from cloudify.exceptions import NonRecoverableError -import stat - -work_environment = os.environ.copy() -work_dir = os.path.expanduser("~") - - -def install_docker(script): - ctx.logger.info('Installing Docker.') - process = subprocess.Popen( - ['sudo', 'sh', script], - stdout=open(os.devnull, 'w'), - stderr=subprocess.PIPE - ) - - output, error = process.communicate() - - if process.returncode: - raise NonRecoverableError( - 'Failed to start Docker bootstrap. ' - 'Output: {0}' - 'Error: {1}'.format(output, error) - ) - - return - - -def check_for_docker(): - - command = 'docker ps' - - try: - process = subprocess.Popen( - command.split() - ) - except OSError: - return False - - output, error = process.communicate() - - ctx.logger.debug( - 'Command: {0} ' - 'Command output: {1} ' - 'Command error: {2} ' - 'Return code: {3}'.format(command, output, error, process.returncode)) - - if process.returncode: - ctx.logger.error('Docker PS returncode was negative. ' - 'Risk of bad installation.') - return False - - return True - - -if __name__ == '__main__': - - command = 'sudo apt-get update' - subprocess.Popen( - command.split(), - stdout=open(os.devnull, 'w'), - stderr=open(os.devnull, 'w') - ).wait() - - if not check_for_docker(): - ctx.logger.info('Install Docker.') - path_to_script = os.path.join('scripts/', 'install_docker.sh') - install_script = ctx.download_resource(path_to_script) - st = os.stat(install_script) - os.chmod(install_script, st.st_mode | stat.S_IEXEC) - install_docker(install_script) - - if not check_for_docker(): - raise NonRecoverableError( - 'Failed to install Docker. ' - 'Check debug log for more info. 
' - ) diff --git a/scripts/fabric_tasks.py b/scripts/fabric_tasks.py deleted file mode 100644 index a3e44b0..0000000 --- a/scripts/fabric_tasks.py +++ /dev/null @@ -1,292 +0,0 @@ -from fabric.api import run, put -from cloudify.exceptions import NonRecoverableError -from cloudify import ctx -import os - - -def start_master_bmc(**kwargs): - stable = run("curl -s https://storage.googleapis.com" - "/kubernetes-release/release/stable.txt") - run("curl -LO https://storage.googleapis.com" - "/kubernetes-release/release/{}" - "/bin/linux/amd64/kubectl".format(stable)) - run("chmod +x kubectl") - run("sudo setenforce 0") - run("sudo systemctl disable firewalld") - run("sudo systemctl stop firewalld") - f = ctx.download_resource("resources/kube-deploy.tgz") - put(f, "/tmp/kube-deploy.tgz") - os.remove(f) - run("rm -rf kube-deploy") - run("tar xzf /tmp/kube-deploy.tgz") - k8s_version = (kwargs['k8s_settings']['k8s_version'] - if 'k8s_version' in kwargs['k8s_settings'] else 'v1.3.0') - etcd_version = (kwargs['k8s_settings']['etcd_version'] - if 'etcd_version' in kwargs['k8s_settings'] else '2.2.5') - flannel_version = (kwargs['k8s_settings']['flannel_version'] - if 'flannel_version' in kwargs['k8s_settings'] - else 'v0.6.2') - flannel_network = (kwargs['k8s_settings']['flannel_network'] - if 'flannel_network' in kwargs['k8s_settings'] - else '10.1.0.0/16') - flannel_ipmasq = (kwargs['k8s_settings']['flannel_ipmasq'] - if 'flannel_ipmasq' in kwargs['k8s_settings'] - else 'true') - flannel_backend = (kwargs['k8s_settings']['flannel_backend'] - if 'flannel_backend' in kwargs['k8s_settings'] - else 'udp') - restart_policy = (kwargs['k8s_settings']['restart_policy'] - if 'restart_policy' in kwargs['k8s_settings'] - else 'unless-stopped') - arch = (kwargs['k8s_settings']['arch'] - if 'arch' in kwargs['k8s_settings'] else 'amd64') - net_interface = (kwargs['k8s_settings']['net_interface'] - if 'net_interface' in kwargs['k8s_settings'] else 'eth0') - - cluster_args = '' - cluster_args = (cluster_args + "--etcd-name {}".format(kwargs['etcd_name']) - if 'etcd_name' in kwargs else '') - cluster_args = (cluster_args + "--etcd-initial-cluster {}". - format(kwargs['etcd_initial_cluster']) - if 'etcd_initial_cluster' in kwargs else '') - cluster_args = (cluster_args + "--etcd-initial-cluster-state {}". - format(kwargs['etcd_initial_cluster_state']) - if 'etcd_initial_cluster_state' in kwargs else '') - cluster_args = (cluster_args + "--etcd-initial-advertise-peer-urls {}". - format(kwargs['etcd_initial_advertise_peer_urls']) - if 'etcd_initial_advertise_peer_urls' in kwargs else '') - cluster_args = (cluster_args + "--etcd-advertise-client-urls {}". - format(kwargs['etcd_advertise_client_urls']) - if 'etcd_initial_advertise_peer_urls' in kwargs else '') - cluster_args = (cluster_args + "--etcd-listen-peer-urls {}". - format(kwargs['etcd_listen_peer_urls']) - if 'etcd_listen_peer_urls' in kwargs else '') - cluster_args = (cluster_args + "--etcd-listen-client-urls {}". 
- format(kwargs['etcd_listen_client_urls']) - if 'etcd_listen_client_urls' in kwargs else '') - - run("cd kube-deploy/docker-multinode;sudo ./master.sh" - " --k8s-version {}" - " --etcd-version {}" - " --flannel-version {}" - " --flannel-network {}" - " --flannel-ipmasq {}" - " --flannel-backend {}" - " --restart-policy {}" - " --arch {}" - " --net-interface {}" - " {}".format( - k8s_version, - etcd_version, - flannel_version, - flannel_network, - flannel_ipmasq, - flannel_backend, - restart_policy, - arch, - net_interface, - cluster_args - )) - -def start_master(**kwargs): - stable = run("curl -s https://storage.googleapis.com" - "/kubernetes-release/release/stable.txt") - run("curl -LO https://storage.googleapis.com" - "/kubernetes-release/release/{}" - "/bin/linux/amd64/kubectl".format(stable)) - run("chmod +x kubectl") - f = ctx.download_resource("resources/kube-deploy.tgz") - put(f, "/tmp/kube-deploy.tgz") - os.remove(f) - run("rm -rf kube-deploy") - run("tar xzf /tmp/kube-deploy.tgz") - k8s_version = (kwargs['k8s_settings']['k8s_version'] - if 'k8s_version' in kwargs['k8s_settings'] else 'v1.3.0') - etcd_version = (kwargs['k8s_settings']['etcd_version'] - if 'etcd_version' in kwargs['k8s_settings'] else '2.2.5') - flannel_version = (kwargs['k8s_settings']['flannel_version'] - if 'flannel_version' in kwargs['k8s_settings'] - else 'v0.6.2') - flannel_network = (kwargs['k8s_settings']['flannel_network'] - if 'flannel_network' in kwargs['k8s_settings'] - else '10.1.0.0/16') - flannel_ipmasq = (kwargs['k8s_settings']['flannel_ipmasq'] - if 'flannel_ipmasq' in kwargs['k8s_settings'] - else 'true') - flannel_backend = (kwargs['k8s_settings']['flannel_backend'] - if 'flannel_backend' in kwargs['k8s_settings'] - else 'udp') - restart_policy = (kwargs['k8s_settings']['restart_policy'] - if 'restart_policy' in kwargs['k8s_settings'] - else 'unless-stopped') - arch = (kwargs['k8s_settings']['arch'] - if 'arch' in kwargs['k8s_settings'] else 'amd64') - net_interface = (kwargs['k8s_settings']['net_interface'] - if 'net_interface' in kwargs['k8s_settings'] else 'eth0') - - cluster_args = '' - cluster_args = (cluster_args + "--etcd-name {}".format(kwargs['etcd_name']) - if 'etcd_name' in kwargs else '') - cluster_args = (cluster_args + "--etcd-initial-cluster {}". - format(kwargs['etcd_initial_cluster']) - if 'etcd_initial_cluster' in kwargs else '') - cluster_args = (cluster_args + "--etcd-initial-cluster-state {}". - format(kwargs['etcd_initial_cluster_state']) - if 'etcd_initial_cluster_state' in kwargs else '') - cluster_args = (cluster_args + "--etcd-initial-advertise-peer-urls {}". - format(kwargs['etcd_initial_advertise_peer_urls']) - if 'etcd_initial_advertise_peer_urls' in kwargs else '') - cluster_args = (cluster_args + "--etcd-advertise-client-urls {}". - format(kwargs['etcd_advertise_client_urls']) - if 'etcd_initial_advertise_peer_urls' in kwargs else '') - cluster_args = (cluster_args + "--etcd-listen-peer-urls {}". - format(kwargs['etcd_listen_peer_urls']) - if 'etcd_listen_peer_urls' in kwargs else '') - cluster_args = (cluster_args + "--etcd-listen-client-urls {}". 
- format(kwargs['etcd_listen_client_urls']) - if 'etcd_listen_client_urls' in kwargs else '') - - run("cd kube-deploy/docker-multinode;sudo ./master.sh" - " --k8s-version {}" - " --etcd-version {}" - " --flannel-version {}" - " --flannel-network {}" - " --flannel-ipmasq {}" - " --flannel-backend {}" - " --restart-policy {}" - " --arch {}" - " --net-interface {}" - " {}".format( - k8s_version, - etcd_version, - flannel_version, - flannel_network, - flannel_ipmasq, - flannel_backend, - restart_policy, - arch, - net_interface, - cluster_args - )) - - - -def start_worker_bmc(**kwargs): - run("sudo setenforce 0") - run("sudo systemctl disable firewalld") - run("sudo systemctl stop firewalld") - run("rm -rf kube-deploy") - f = ctx.download_resource("resources/kube-deploy.tgz") - put(f, "/tmp/kube-deploy.tgz") - run("tar xzf /tmp/kube-deploy.tgz") - os.remove(f) - - if 'master_ip' not in kwargs or 'k8s_settings' not in kwargs: - raise NonRecoverableError("master_ip and k8s_settings required") - - master_ip = kwargs['master_ip'] - k8s_version = (kwargs['k8s_settings']['k8s_version'] - if 'k8s_version' in kwargs['k8s_settings'] else 'v1.3.0') - etcd_version = (kwargs['k8s_settings']['etcd_version'] - if 'etcd_version' in kwargs['k8s_settings'] else '2.2.5') - flannel_version = (kwargs['k8s_settings']['flannel_version'] - if 'flannel_version' in kwargs['k8s_settings'] - else 'v0.6.2') - flannel_network = (kwargs['k8s_settings']['flannel_network'] - if 'flannel_network' in kwargs['k8s_settings'] - else '10.1.0.0/16') - flannel_ipmasq = (kwargs['k8s_settings']['flannel_ipmasq'] - if 'flannel_ipmasq' in kwargs['k8s_settings'] - else 'true') - flannel_backend = (kwargs['k8s_settings']['flannel_backend'] - if 'flannel_backend' in kwargs['k8s_settings'] - else 'udp') - restart_policy = (kwargs['k8s_settings']['restart_policy'] - if 'restart_policy' in kwargs['k8s_settings'] - else 'unless-stopped') - arch = (kwargs['k8s_settings']['arch'] - if 'arch' in kwargs['k8s_settings'] else 'amd64') - net_interface = (kwargs['k8s_settings']['net_interface'] - if 'net_interface' in kwargs['k8s_settings'] else 'eth0') - - run("cd kube-deploy/docker-multinode;sudo ./worker.sh" - " --master-ip {} " - " --k8s-version {}" - " --etcd-version {}" - " --flannel-version {}" - " --flannel-network {}" - " --flannel-ipmasq {}" - " --flannel-backend {}" - " --restart-policy {}" - " --arch {}" - " --net-interface {}".format( - master_ip, - k8s_version, - etcd_version, - flannel_version, - flannel_network, - flannel_ipmasq, - flannel_backend, - restart_policy, - arch, - net_interface - )) - -def start_worker(**kwargs): - run("rm -rf kube-deploy") - f = ctx.download_resource("resources/kube-deploy.tgz") - put(f, "/tmp/kube-deploy.tgz") - run("tar xzf /tmp/kube-deploy.tgz") - os.remove(f) - - if 'master_ip' not in kwargs or 'k8s_settings' not in kwargs: - raise NonRecoverableError("master_ip and k8s_settings required") - - master_ip = kwargs['master_ip'] - k8s_version = (kwargs['k8s_settings']['k8s_version'] - if 'k8s_version' in kwargs['k8s_settings'] else 'v1.3.0') - etcd_version = (kwargs['k8s_settings']['etcd_version'] - if 'etcd_version' in kwargs['k8s_settings'] else '2.2.5') - flannel_version = (kwargs['k8s_settings']['flannel_version'] - if 'flannel_version' in kwargs['k8s_settings'] - else 'v0.6.2') - flannel_network = (kwargs['k8s_settings']['flannel_network'] - if 'flannel_network' in kwargs['k8s_settings'] - else '10.1.0.0/16') - flannel_ipmasq = (kwargs['k8s_settings']['flannel_ipmasq'] - if 'flannel_ipmasq' in 
kwargs['k8s_settings'] - else 'true') - flannel_backend = (kwargs['k8s_settings']['flannel_backend'] - if 'flannel_backend' in kwargs['k8s_settings'] - else 'udp') - restart_policy = (kwargs['k8s_settings']['restart_policy'] - if 'restart_policy' in kwargs['k8s_settings'] - else 'unless-stopped') - arch = (kwargs['k8s_settings']['arch'] - if 'arch' in kwargs['k8s_settings'] else 'amd64') - net_interface = (kwargs['k8s_settings']['net_interface'] - if 'net_interface' in kwargs['k8s_settings'] else 'eth0') - - run("cd kube-deploy/docker-multinode;sudo ./worker.sh" - " --master-ip {} " - " --k8s-version {}" - " --etcd-version {}" - " --flannel-version {}" - " --flannel-network {}" - " --flannel-ipmasq {}" - " --flannel-backend {}" - " --restart-policy {}" - " --arch {}" - " --net-interface {}".format( - master_ip, - k8s_version, - etcd_version, - flannel_version, - flannel_network, - flannel_ipmasq, - flannel_backend, - restart_policy, - arch, - net_interface - )) - diff --git a/scripts/install_docker.sh b/scripts/install_docker.sh deleted file mode 100644 index cafe01a..0000000 --- a/scripts/install_docker.sh +++ /dev/null @@ -1,506 +0,0 @@ -#!/bin/sh -set -e -# -# This script is meant for quick & easy install via: -# 'curl -sSL https://get.docker.com/ | sh' -# or: -# 'wget -qO- https://get.docker.com/ | sh' -# -# For test builds (ie. release candidates): -# 'curl -fsSL https://test.docker.com/ | sh' -# or: -# 'wget -qO- https://test.docker.com/ | sh' -# -# For experimental builds: -# 'curl -fsSL https://experimental.docker.com/ | sh' -# or: -# 'wget -qO- https://experimental.docker.com/ | sh' -# -# Docker Maintainers: -# To update this script on https://get.docker.com, -# use hack/release.sh during a normal release, -# or the following one-liner for script hotfixes: -# aws s3 cp --acl public-read hack/install.sh s3://get.docker.com/index -# - -url="https://get.docker.com/" -apt_url="https://apt.dockerproject.org" -yum_url="https://yum.dockerproject.org" -gpg_fingerprint="58118E89F3A912897C070ADBF76221572C52609D" - -key_servers=" -ha.pool.sks-keyservers.net -pgp.mit.edu -keyserver.ubuntu.com -" - -command_exists() { - command -v "$@" > /dev/null 2>&1 -} - -echo_docker_as_nonroot() { - if command_exists docker && [ -e /var/run/docker.sock ]; then - ( - set -x - $sh_c 'docker version' - ) || true - fi - your_user=your-user - [ "$user" != 'root' ] && your_user="$user" - # intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output - cat <<-EOF - - If you would like to use Docker as a non-root user, you should now consider - adding your user to the "docker" group with something like: - - sudo usermod -aG docker $your_user - - Remember that you will have to log out and back in for this to take effect! - - EOF -} - -# Check if this is a forked Linux distro -check_forked() { - - # Check for lsb_release command existence, it usually exists in forked distros - if command_exists lsb_release; then - # Check if the `-u` option is supported - set +e - lsb_release -a -u > /dev/null 2>&1 - lsb_release_exit_code=$? - set -e - - # Check if the command has exited successfully, it means we're in a forked distro - if [ "$lsb_release_exit_code" = "0" ]; then - # Print info about current distro - cat <<-EOF - You're using '$lsb_dist' version '$dist_version'. 
- EOF - - # Get the upstream release info - lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[[:space:]]') - dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[[:space:]]') - - # Print info about upstream distro - cat <<-EOF - Upstream release is '$lsb_dist' version '$dist_version'. - EOF - else - if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ]; then - # We're Debian and don't even know it! - lsb_dist=debian - dist_version="$(cat /etc/debian_version | sed 's/\/.*//' | sed 's/\..*//')" - case "$dist_version" in - 8|'Kali Linux 2') - dist_version="jessie" - ;; - 7) - dist_version="wheezy" - ;; - esac - fi - fi - fi -} - -rpm_import_repository_key() { - local key=$1; shift - local tmpdir=$(mktemp -d) - chmod 600 "$tmpdir" - for key_server in $key_servers ; do - gpg --homedir "$tmpdir" --keyserver "$key_server" --recv-keys "$key" && break - done - gpg --homedir "$tmpdir" -k "$key" >/dev/null - gpg --homedir "$tmpdir" --export --armor "$key" > "$tmpdir"/repo.key - rpm --import "$tmpdir"/repo.key - rm -rf "$tmpdir" -} - -semverParse() { - major="${1%%.*}" - minor="${1#$major.}" - minor="${minor%%.*}" - patch="${1#$major.$minor.}" - patch="${patch%%[-.]*}" -} - -do_install() { - case "$(uname -m)" in - *64) - ;; - *) - cat >&2 <<-'EOF' - Error: you are not using a 64bit platform. - Docker currently only supports 64bit platforms. - EOF - exit 1 - ;; - esac - - if command_exists docker; then - version="$(docker -v | awk -F '[ ,]+' '{ print $3 }')" - MAJOR_W=1 - MINOR_W=10 - - semverParse $version - - shouldWarn=0 - if [ $major -lt $MAJOR_W ]; then - shouldWarn=1 - fi - - if [ $major -le $MAJOR_W ] && [ $minor -lt $MINOR_W ]; then - shouldWarn=1 - fi - - cat >&2 <<-'EOF' - Warning: the "docker" command appears to already exist on this system. - - If you already have Docker installed, this script can cause trouble, which is - why we're displaying this warning and provide the opportunity to cancel the - installation. - - If you installed the current Docker package using this script and are using it - EOF - - if [ $shouldWarn -eq 1 ]; then - cat >&2 <<-'EOF' - again to update Docker, we urge you to migrate your image store before upgrading - to v1.10+. - - You can find instructions for this here: - https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration - EOF - else - cat >&2 <<-'EOF' - again to update Docker, you can safely ignore this message. - EOF - fi - - cat >&2 <<-'EOF' - - You may press Ctrl+C now to abort this script. - EOF - ( set -x; sleep 20 ) - fi - - user="$(id -un 2>/dev/null || true)" - - sh_c='sh -c' - if [ "$user" != 'root' ]; then - if command_exists sudo; then - sh_c='sudo -E sh -c' - elif command_exists su; then - sh_c='su -c' - else - cat >&2 <<-'EOF' - Error: this installer needs the ability to run commands as root. - We are unable to find either "sudo" or "su" available to make this happen. 
- EOF - exit 1 - fi - fi - - curl='' - if command_exists curl; then - curl='curl -sSL' - elif command_exists wget; then - curl='wget -qO-' - elif command_exists busybox && busybox --list-modules | grep -q wget; then - curl='busybox wget -qO-' - fi - - # check to see which repo they are trying to install from - if [ -z "$repo" ]; then - repo='main' - if [ "https://test.docker.com/" = "$url" ]; then - repo='testing' - elif [ "https://experimental.docker.com/" = "$url" ]; then - repo='experimental' - fi - fi - - # perform some very rudimentary platform detection - lsb_dist='' - dist_version='' - if command_exists lsb_release; then - lsb_dist="$(lsb_release -si)" - fi - if [ -z "$lsb_dist" ] && [ -r /etc/lsb-release ]; then - lsb_dist="$(. /etc/lsb-release && echo "$DISTRIB_ID")" - fi - if [ -z "$lsb_dist" ] && [ -r /etc/debian_version ]; then - lsb_dist='debian' - fi - if [ -z "$lsb_dist" ] && [ -r /etc/fedora-release ]; then - lsb_dist='fedora' - fi - if [ -z "$lsb_dist" ] && [ -r /etc/oracle-release ]; then - lsb_dist='oracleserver' - fi - if [ -z "$lsb_dist" ]; then - if [ -r /etc/centos-release ] || [ -r /etc/redhat-release ]; then - lsb_dist='centos' - fi - fi - if [ -z "$lsb_dist" ] && [ -r /etc/os-release ]; then - lsb_dist="$(. /etc/os-release && echo "$ID")" - fi - - lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')" - - case "$lsb_dist" in - - ubuntu) - if command_exists lsb_release; then - dist_version="$(lsb_release --codename | cut -f2)" - fi - if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then - dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")" - fi - ;; - - debian) - dist_version="$(cat /etc/debian_version | sed 's/\/.*//' | sed 's/\..*//')" - case "$dist_version" in - 8) - dist_version="jessie" - ;; - 7) - dist_version="wheezy" - ;; - esac - ;; - - oracleserver) - # need to switch lsb_dist to match yum repo URL - lsb_dist="oraclelinux" - dist_version="$(rpm -q --whatprovides redhat-release --queryformat "%{VERSION}\n" | sed 's/\/.*//' | sed 's/\..*//' | sed 's/Server*//')" - ;; - - fedora|centos) - dist_version="$(rpm -q --whatprovides redhat-release --queryformat "%{VERSION}\n" | sed 's/\/.*//' | sed 's/\..*//' | sed 's/Server*//')" - ;; - - *) - if command_exists lsb_release; then - dist_version="$(lsb_release --codename | cut -f2)" - fi - if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then - dist_version="$(. 
/etc/os-release && echo "$VERSION_ID")" - fi - ;; - - - esac - - # Check if this is a forked Linux distro - check_forked - - # Run setup for each distro accordingly - case "$lsb_dist" in - amzn) - ( - set -x - $sh_c 'sleep 3; yum -y -q install docker' - ) - echo_docker_as_nonroot - exit 0 - ;; - - 'opensuse project'|opensuse) - echo 'Going to perform the following operations:' - if [ "$repo" != 'main' ]; then - echo ' * add repository obs://Virtualization:containers' - fi - echo ' * install Docker' - $sh_c 'echo "Press CTRL-C to abort"; sleep 3' - - if [ "$repo" != 'main' ]; then - # install experimental packages from OBS://Virtualization:containers - ( - set -x - zypper -n ar -f obs://Virtualization:containers Virtualization:containers - rpm_import_repository_key 55A0B34D49501BB7CA474F5AA193FBB572174FC2 - ) - fi - ( - set -x - zypper -n install docker - ) - echo_docker_as_nonroot - exit 0 - ;; - 'suse linux'|sle[sd]) - echo 'Going to perform the following operations:' - if [ "$repo" != 'main' ]; then - echo ' * add repository obs://Virtualization:containers' - echo ' * install experimental Docker using packages NOT supported by SUSE' - else - echo ' * add the "Containers" module' - echo ' * install Docker using packages supported by SUSE' - fi - $sh_c 'echo "Press CTRL-C to abort"; sleep 3' - - if [ "$repo" != 'main' ]; then - # install experimental packages from OBS://Virtualization:containers - echo >&2 'Warning: installing experimental packages from OBS, these packages are NOT supported by SUSE' - ( - set -x - zypper -n ar -f obs://Virtualization:containers/SLE_12 Virtualization:containers - rpm_import_repository_key 55A0B34D49501BB7CA474F5AA193FBB572174FC2 - ) - else - # Add the containers module - # Note well-1: the SLE machine must already be registered against SUSE Customer Center - # Note well-2: the `-r ""` is required to workaround a known issue of SUSEConnect - ( - set -x - SUSEConnect -p sle-module-containers/12/x86_64 -r "" - ) - fi - ( - set -x - zypper -n install docker - ) - echo_docker_as_nonroot - exit 0 - ;; - - ubuntu|debian) - export DEBIAN_FRONTEND=noninteractive - - did_apt_get_update= - apt_get_update() { - if [ -z "$did_apt_get_update" ]; then - ( set -x; $sh_c 'sleep 3; apt-get update' ) - did_apt_get_update=1 - fi - } - - # aufs is preferred over devicemapper; try to ensure the driver is available. - if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then - if uname -r | grep -q -- '-generic' && dpkg -l 'linux-image-*-generic' | grep -qE '^ii|^hi' 2>/dev/null; then - kern_extras="linux-image-extra-$(uname -r) linux-image-extra-virtual" - - apt_get_update - ( set -x; $sh_c 'sleep 3; apt-get install -y -q '"$kern_extras" ) || true - - if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then - echo >&2 'Warning: tried to install '"$kern_extras"' (for AUFS)' - echo >&2 ' but we still have no AUFS. Docker may not work. Proceeding anyways!' - ( set -x; sleep 10 ) - fi - else - echo >&2 'Warning: current kernel is not supported by the linux-image-extra-virtual' - echo >&2 ' package. We have no AUFS support. Consider installing the packages' - echo >&2 ' linux-image-virtual kernel and linux-image-extra-virtual for AUFS support.' 
- ( set -x; sleep 10 ) - fi - fi - - # install apparmor utils if they're missing and apparmor is enabled in the kernel - # otherwise Docker will fail to start - if [ "$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null)" = 'Y' ]; then - if command -v apparmor_parser >/dev/null 2>&1; then - echo 'apparmor is enabled in the kernel and apparmor utils were already installed' - else - echo 'apparmor is enabled in the kernel, but apparmor_parser missing' - apt_get_update - ( set -x; $sh_c 'sleep 3; apt-get install -y -q apparmor' ) - fi - fi - - if [ ! -e /usr/lib/apt/methods/https ]; then - apt_get_update - ( set -x; $sh_c 'sleep 3; apt-get install -y -q apt-transport-https ca-certificates' ) - fi - if [ -z "$curl" ]; then - apt_get_update - ( set -x; $sh_c 'sleep 3; apt-get install -y -q curl ca-certificates' ) - curl='curl -sSL' - fi - ( - set -x - for key_server in $key_servers ; do - $sh_c "apt-key adv --keyserver hkp://${key_server}:80 --recv-keys ${gpg_fingerprint}" && break - done - $sh_c "apt-key adv -k ${gpg_fingerprint} >/dev/null" - $sh_c "mkdir -p /etc/apt/sources.list.d" - $sh_c "echo deb [arch=$(dpkg --print-architecture)] ${apt_url}/repo ${lsb_dist}-${dist_version} ${repo} > /etc/apt/sources.list.d/docker.list" - $sh_c 'sleep 3; apt-get update; apt-get install -y -q docker-engine=1.11.2-0~trusty' - ) - echo_docker_as_nonroot - exit 0 - ;; - - fedora|centos|oraclelinux) - $sh_c "cat >/etc/yum.repos.d/docker-${repo}.repo" <<-EOF - [docker-${repo}-repo] - name=Docker ${repo} Repository - baseurl=${yum_url}/repo/${repo}/${lsb_dist}/${dist_version} - enabled=1 - gpgcheck=1 - gpgkey=${yum_url}/gpg - EOF - if [ "$lsb_dist" = "fedora" ] && [ "$dist_version" -ge "22" ]; then - ( - set -x - $sh_c 'sleep 3; dnf -y -q install docker-engine' - ) - else - ( - set -x - $sh_c 'sleep 3; yum -y -q install docker-engine' - ) - fi - echo_docker_as_nonroot - exit 0 - ;; - gentoo) - if [ "$url" = "https://test.docker.com/" ]; then - # intentionally mixed spaces and tabs here -- tabs are stripped by "<<-'EOF'", spaces are kept in the output - cat >&2 <<-'EOF' - - You appear to be trying to install the latest nightly build in Gentoo.' - The portage tree should contain the latest stable release of Docker, but' - if you want something more recent, you can always use the live ebuild' - provided in the "docker" overlay available via layman. For more' - instructions, please see the following URL:' - - https://github.com/tianon/docker-overlay#using-this-overlay' - - After adding the "docker" overlay, you should be able to:' - - emerge -av =app-emulation/docker-9999' - - EOF - exit 1 - fi - - ( - set -x - $sh_c 'sleep 3; emerge app-emulation/docker' - ) - exit 0 - ;; - esac - - # intentionally mixed spaces and tabs here -- tabs are stripped by "<<-'EOF'", spaces are kept in the output - cat >&2 <<-'EOF' - - Either your platform is not easily detectable, is not supported by this - installer script (yet - PRs welcome! [hack/install.sh]), or does not yet have - a package for Docker. 
Please visit the following URL for more detailed - installation instructions: - - https://docs.docker.com/engine/installation/ - - EOF - exit 1 -} - -# wrapped up in a function so that we have some protection against only getting -# half the file during "curl | sh" -do_install diff --git a/scripts/kubectl.py b/scripts/kubectl.py deleted file mode 100644 index fb5a081..0000000 --- a/scripts/kubectl.py +++ /dev/null @@ -1,44 +0,0 @@ -#!/usr/bin/env python - -import stat -import urllib -from cloudify.exceptions import NonRecoverableError -from cloudify import ctx -import os -from cloudify.state import ctx_parameters as inputs -import subprocess - -PATH = os.path.join( - os.path.expanduser('~'), - 'kubectl' -) - - -if __name__ == '__main__': - - ctx.logger.info('Installing kubectl') - - url = inputs['kubectl_url'] - - try: - urllib.urlretrieve(url, PATH) - except: - raise NonRecoverableError() - - st = os.stat(PATH) - os.chmod(PATH, st.st_mode | stat.S_IEXEC) - - command = 'sudo mv {0} /usr/local/bin/kubectl'.format(PATH) - - result = subprocess.Popen( - command.split(), - cwd=os.path.expanduser('~') - ) - - output = result.communicate() - - if result.returncode: - raise NonRecoverableError( - 'Error: {0} ' - 'Output: {1}'.format(result.returncode, output) - ) diff --git a/scripts/start_master.py b/scripts/start_master.py new file mode 100644 index 0000000..7dbb8dd --- /dev/null +++ b/scripts/start_master.py @@ -0,0 +1,68 @@ +#!/usr/bin/env python + +import os +import subprocess +from cloudify import ctx +from cloudify.exceptions import RecoverableError + + +def execute_command(_command): + + ctx.logger.debug('_command {0}.'.format(_command)) + + subprocess_args = { + 'args': _command.split(), + 'stdout': subprocess.PIPE, + 'stderr': subprocess.PIPE + } + + ctx.logger.debug('subprocess_args {0}.'.format(subprocess_args)) + + process = subprocess.Popen(**subprocess_args) + output, error = process.communicate() + + ctx.logger.debug('command: {0} '.format(_command)) + ctx.logger.debug('output: {0} '.format(output)) + ctx.logger.debug('error: {0} '.format(error)) + ctx.logger.debug('process.returncode: {0} '.format(process.returncode)) + + if process.returncode: + ctx.logger.error('Running `{0}` returns error.'.format(_command)) + return False + + return output + + +def check_kubedns_status(_get_pods): + + ctx.logger.debug('get_pods: {0} '.format(_get_pods)) + + for pod_line in _get_pods.split('\n'): + ctx.logger.debug('pod_line: {0} '.format(pod_line)) + try: + _namespace, _name, _ready, _status, _restarts, _age = pod_line.split() + except ValueError: + pass + else: + if 'kube-dns' in _name and 'Running' not in _status: + return False + elif 'kube-dns' in _name and 'Running' in _status: + return True + return False + + +if __name__ == '__main__': + + admin_file_dest = os.path.join(os.path.expanduser('~'), 'admin.conf') + os.environ['KUBECONFIG'] = admin_file_dest + + get_pods = execute_command('kubectl get pods --all-namespaces') + + if not check_kubedns_status(get_pods): + raise RecoverableError('kube-dns not Running') + + with open(admin_file_dest, 'r') as outfile: + configuration_file_contents = outfile.read() + + ctx.instance.runtime_properties['configuration_file_content'] = \ + configuration_file_contents diff --git a/service-blueprint.yaml b/service-blueprint.yaml deleted file mode 100644 index bc07824..0000000 --- a/service-blueprint.yaml +++ /dev/null @@ -1,59 +0,0 @@ -tosca_definitions_version: cloudify_dsl_1_3 - -imports: - - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml - - 
plugins/cloudify-kubernetes-plugin/plugin.yaml - - plugins/cloudify-proxy-plugin/plugin.yaml - -inputs: - - kubernetes_deployment: - description: > - The kubernetes deployment id - default: kubernetes - service_port: - description: the service port - type: integer - default: 30003 - -node_templates: - -# kubernetes_master: -# type: cloudify.kubernetes.Master -# properties: -# ip: 172.16.0.162 - - kubernetes_proxy: - type: cloudify.nodes.DeploymentProxy - properties: - inherit_outputs: - - 'kubernetes_info' - interfaces: - cloudify.interfaces.lifecycle: - create: - inputs: - deployment_id: { get_input: kubernetes_deployment } - start: - inputs: - deployment_id: { get_input: kubernetes_deployment } - stop: - inputs: - deployment_id: { get_input: kubernetes_deployment } - - nginx: - type: cloudify.kubernetes.Microservice - properties: - name: nginx - ssh_username: ubuntu - ssh_keyfilename: /root/.ssh/agent_key.pem - config_files: - - file: resources/pod.yaml - - file: resources/service.yaml - overrides: - - { concat: [ "['spec']['ports'][0]['nodePort']= ", { get_input: service_port} ] } - relationships: - - type: cloudify.kubernetes.relationships.connected_to_master - target: kubernetes_proxy - #target: kubernetes_master - - diff --git a/types/cloud_config/cloud-config.yaml b/types/cloud_config/cloud-config.yaml new file mode 100644 index 0000000..2fe4b20 --- /dev/null +++ b/types/cloud_config/cloud-config.yaml @@ -0,0 +1,13 @@ +node_types: + + cloudify.nodes.CloudConfig: + derived_from: cloudify.nodes.Root + properties: + resource_config: + default: + encode_base64: false + interfaces: + cloudify.interfaces.lifecycle: + create: + implementation: scripts/cloud_config/create.py + executor: central_deployment_agent diff --git a/plugins/cloudify-proxy-plugin/blueprints/__init__.py b/types/docker.yaml similarity index 100% rename from plugins/cloudify-proxy-plugin/blueprints/__init__.py rename to types/docker.yaml diff --git a/types/kubernetes.yaml b/types/kubernetes.yaml new file mode 100644 index 0000000..7a0054f --- /dev/null +++ b/types/kubernetes.yaml @@ -0,0 +1,28 @@ +node_types: + + cloudify.nodes.Kubernetes: + derived_from: cloudify.nodes.Root + interfaces: + cloudify.interfaces.lifecycle: + create: + implementation: scripts/create.py + + cloudify.nodes.Kubernetes.Master: + derived_from: cloudify.nodes.Root + interfaces: + cloudify.interfaces.lifecycle: + create: + implementation: scripts/create.py + configure: + implementation: scripts/configure_master.py + start: + implementation: scripts/start_master.py + + cloudify.nodes.Kubernetes.Node: + derived_from: cloudify.nodes.Root + interfaces: + cloudify.interfaces.lifecycle: + create: + implementation: scripts/create.py + configure: + implementation: scripts/configure_node.py
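
### How the new node types fit together (illustrative)

The `types/kubernetes.yaml` file added above only declares node types; the platform blueprints in this repository (for example `aws-blueprint.yaml`) are expected to attach them to compute hosts. The sketch below is a rough, hypothetical wiring and is not a file from this diff: the host template names and the use of the generic `cloudify.nodes.Compute` type are assumptions, while the Kubernetes types and the relationship behaviour come from the scripts above.

```yaml
# Illustrative sketch only -- not part of this repository's blueprints.
# Assumes hosts with Docker pre-installed, which scripts/create.py polls for.
node_templates:

  kubernetes_master_host:
    type: cloudify.nodes.Compute   # stand-in for the platform-specific VM type

  kubernetes_master:
    type: cloudify.nodes.Kubernetes.Master
    relationships:
      - type: cloudify.relationships.contained_in
        target: kubernetes_master_host

  kubernetes_node_host:
    type: cloudify.nodes.Compute

  kubernetes_node:
    type: cloudify.nodes.Kubernetes.Node
    relationships:
      - type: cloudify.relationships.contained_in
        target: kubernetes_node_host
      # scripts/configure_node.py searches this node's relationships for a
      # target whose type hierarchy includes cloudify.nodes.Kubernetes.Master
      # and reuses the 'join_command' runtime property stored by
      # scripts/configure_master.py.
      - type: cloudify.relationships.depends_on
        target: kubernetes_master
```

Some relationship from the worker to the master is required, because `scripts/configure_node.py` only inspects `ctx.instance.relationships` to find the `join_command`; `depends_on` additionally ensures the master completes its lifecycle (and therefore publishes the join command) before the worker's `configure` operation runs.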