AMI with Vault and Consul binaries installed. DNSmasq is also configured to use the local Consul agent as its DNS server.
This is based on this example.
All of the following commands are assumed to be run from this vault/ directory.
In addition to Ansible (2.7) and Packer, you will need to install the following on your machine:
- cfssl and cfssljson (used to generate and sign the certificate below)
- the AWS CLI (used for the KMS encryption and decryption steps)
You will need to have a TLS certificate generated for Vault. This usually requires an existing CA. Refer to ca for instructions on how to set up a CA with the associated keys.
For example, we will generate the certificate into the vault/cert directory:
# Generate key pair and CSR
cfssl genkey -config "../../ca/config.json" \
-profile peer ../../ca/vault-cert/csr.json \
| cfssljson -bare cert/cert
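The exact contents of ../../ca/vault-cert/csr.json depend on how you set up your CA. Purely as an illustrative sketch of cfssl's standard CSR format, it might look like the following; the common name, hosts and organisation are placeholders and should be replaced with your own:
{
  "CN": "vault.service.consul",
  "hosts": [
    "vault.service.consul",
    "127.0.0.1"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "O": "Example Organisation"
    }
  ]
}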
At this point, you must have decrypted the AWS KMS-encrypted CA private key (i.e. ca.key to ca-key.pem) so that the CA private key can be used to sign the CSR.
Ensure that the following files contain the correct values and are placed in ../../ca/root/:
- ca.key
- ca.pem
- cli.json
- csr.json

Then perform the decryption step as shown in the guide; the command below, adapted from it, recovers the original CA private key:
aws kms decrypt \
--ciphertext-blob fileb://../../ca/root/ca.key \
--output text \
--query Plaintext \
--cli-input-json file://../../ca/root/cli.json \
| base64 --decode \
> ../../ca/root/ca-key.pem
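As an optional sanity check (not part of the original guide), you can confirm that the decrypted output parses as a valid PEM private key before using it:
# Parse the decrypted key without printing it; a zero exit status means it is valid
openssl pkey -in ../../ca/root/ca-key.pem -noout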
With the original CA private key in place, sign the CSR with the following:
# Sign the CSR
cfssl sign -ca "../../ca/root/ca.pem" \
-ca-key "../../ca/root/ca-key.pem" \
-config "../../ca/config.json" \
-profile peer \
cert/cert.csr \
| cfssljson -bare cert/cert
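cfssljson -bare cert/cert writes the signed certificate to cert/cert.pem, alongside the cert/cert-key.pem generated earlier. If you wish, you can inspect the certificate to confirm its hostnames and expiry before proceeding:
# Optional: print the certificate's subject, SANs and validity as JSON
cfssl certinfo -cert cert/cert.pem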
After you have generated the certificate, you will need to encrypt its private key before it is copied over to the AMI.
We will use the kms-aes
Ansible playbooks and roles to
handle the encryption and decryption.
Check out the repository to a directory and follow the instructions for the Vault playbook.
For example, with the provided example cli.json and our terraform KMS key, we can do the following to generate a data encryption key and encrypt our certificate private key:
ansible-playbook \
-i "localhost," \
-c "local" \
-t "generate_key,encrypt" \
-e "key_id=alias/terraform" \
-e "cli_json=$(pwd)/cert/cli.json" \
-e "key_output=$(pwd)/cert/aes.key" \
-e "vault_file=$(pwd)/cert/cert-key.pem" \
-e "encrypted_vault_file=$(pwd)/cert/cert.key" \
/path/to/playbook/vault.yml
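If the playbook succeeds, the encrypted data key and the encrypted certificate key should appear at cert/aes.key and cert/cert.key respectively. A quick check (not part of the playbook itself), plus an optional cleanup of the plaintext key once you are satisfied:
# Confirm the encrypted artifacts were written
ls -l cert/aes.key cert/cert.key
# Optionally remove the plaintext private key after verifying the encrypted copy
# rm cert/cert-key.pem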
This will output the encrypted keys and other files to their default locations. Alternatively, you can configure the paths to the keys using the options listed in the next section.
See this page for more information.
- ami_base_name: Base name for the AMI image. The timestamp will be appended.
- aws_region: AWS region.
- subnet_id: ID of the subnet to run the builder instance in.
- temporary_security_group_source_cidr: Temporary CIDR to allow SSH access from.
- associate_public_ip_address: Set to true if the machine provisioned is to be connected to via the internet.
- ssh_interface: One of public_ip, private_ip, public_dns or private_dns. If set, the corresponding public IP address, private IP address, public DNS name or private DNS name will be used as the host for SSH. The default behaviour inside a VPC is to use the public IP address if available, otherwise the private IP address; outside a VPC, the public DNS name will be used.
- vault_version: Version of Vault to install.
- consul_module_version: Version of the Terraform Consul repository to use.
- vault_module_version: Version of the Vault module to use.
- vault_ui_enable: Whether to enable the Vault UI. Defaults to true.
- consul_version: Version of Consul to install.
- tls_cert_file_src: Path to the certificate file for Vault to use. This defaults to cert/cert.pem if you used the instructions above.
- encrypted_tls_key_file_src: Encrypted private key for the certificate. This defaults to cert/cert.key if you used the instructions above.
- encrypted_aes_key_src: AES data key used to encrypt the private key, which is in turn encrypted by AWS KMS. Defaults to cert/aes.key if you used the instructions above.
- cli_json_src: The AWS CLI JSON file used to encrypt the AES key. This defaults to cert/cli.json if you used the instructions above.
- td_agent_config_file: Path to the td-agent config file to template and copy from. td-agent is installed only if this path is non-empty.
- td_agent_config_vars_file: Path to a variables file to include for value interpolation in the td-agent config file. Only included if the value is not empty. include_vars loads the variables into the config_vars variable, i.e. if xxx is defined in the variables file, you will need to write {{ config_vars.xxx }} for the interpolation to work.
- ca_certificate: Path to the CA certificate you have generated to install on the machine. Set to empty to not install anything.
After the initial bootstrap, if you have applied one of the following post-bootstrap modules, you should set the following options to install whatever prerequisites are required in the AMI:
- Vault PKI
The following options are common to all of the integrations:
- consul_host: The host at which Consul is accessible. Defaults to empty. If set to empty, all post-bootstrap integrations will be disabled.
- consul_port: Port on which Consul is accessible. Defaults to 443.
- consul_scheme: Scheme used to access Consul. Defaults to "https".
- consul_token: ACL token to access Consul.
- consul_integration_prefix: Prefix to look for Consul integration values under. Do not change this unless you have also modified the values in the appropriate modules. Defaults to "terraform/".
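For illustration, a vars.json overriding a handful of the variables above might look like the following; every value shown here, including the version numbers and IDs, is a placeholder and should be replaced with your own:
{
  "ami_base_name": "vault-consul",
  "aws_region": "ap-southeast-1",
  "subnet_id": "subnet-0123456789abcdef0",
  "temporary_security_group_source_cidr": "203.0.113.0/24",
  "associate_public_ip_address": "true",
  "ssh_interface": "public_ip",
  "vault_version": "1.0.1",
  "consul_version": "1.4.0",
  "tls_cert_file_src": "cert/cert.pem",
  "encrypted_tls_key_file_src": "cert/cert.key",
  "encrypted_aes_key_src": "cert/aes.key",
  "cli_json_src": "cert/cli.json"
}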
If you have a vars.json
variables file containing changes to the above variables, you may run:
packer build \
-var-file=vars.json \
packer.json
Otherwise if you wish to use the default variable values, simply run:
packer build packer.json
If you have enabled the post-bootstrap integration, you can use terraform output
to get the URL of your Consul servers. In this way, you can use the same command for both pre- and
post-bootstrap builds of your AMI.
packer build \
-var-file=vars.json \
-var consul_host="$(terraform output consul_api_address || echo -n '')" \
packer.json
This Packer template will install the following:
- Consul: /opt/consul
- Vault: /opt/vault
- td-agent: As a Debian package
- telegraf: As a Debian package
- consul-template: /opt/consul-template
You can use consul-template
to template files using data from Consul and Vault. Simply define
the template using a new configuration file (in HCL, with the template
stanza) and write the
configuration file to /opt/consul-template/config
. You can send the SIGHUP
signal using
systemctl kill -s SIGHUP consul-template
to ask consul-template
to reload its configuration.
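For example, a hypothetical file such as /opt/consul-template/config/my-app.hcl could render a file from Consul or Vault data using consul-template's template stanza; the service name, paths and reload command below are placeholders:
# Hypothetical consul-template configuration; adjust paths and command for your service
template {
  # Template containing the Consul/Vault lookups to render
  source      = "/opt/consul-template/template/my-app.ctmpl"
  # File written by consul-template whenever the rendered data changes
  destination = "/etc/my-app/config.ini"
  # Optional command to run after the destination file is updated
  command     = "systemctl reload my-app"
}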