Deploy a Milvus Cluster on EC2
This topic describes how to deploy a Milvus cluster on Amazon EC2 with Terraform and Ansible.
This section describes how to use Terraform to provision a Milvus cluster.
Terraform is an infrastructure as code (IaC) software tool. With Terraform, you can provision infrastructure by using declarative configuration files.
You can download template configuration files at Google Drive.
- main.tf
  This file contains the configuration for provisioning a Milvus cluster.
- variables.tf
  This file allows quick editing of variables used to set up or update a Milvus cluster.
- output.tf and inventory.tmpl
  These files store the metadata of a Milvus cluster. The metadata used in this topic is the public_ip of each node instance, the private_ip of each node instance, and all EC2 instance IDs.
This section describes the configuration that a variables.tf file contains.
- Number of nodes
  The following template declares an index_count variable used to set the number of index nodes. The value of index_count must be greater than or equal to one.

  ```
  variable "index_count" {
    description = "Amount of index instances to run"
    type        = number
    default     = 5
  }
  ```
- Instance type for a node type
  The following template declares an index_ec2_type variable used to set the instance type for index nodes.

  ```
  variable "index_ec2_type" {
    description = "Which server type"
    type        = string
    default     = "c5.2xlarge"
  }
  ```
- Access permission
  The following template declares a key_name variable and a my_ip variable. The key_name variable represents the name of the AWS key pair used to access the node instances. The my_ip variable represents the IP address range for the security group.

  ```
  variable "key_name" {
    description = "Which aws key to use for access into instances, needs to be uploaded already"
    type        = string
    default     = ""
  }

  variable "my_ip" {
    description = "my_ip for security group. used so that ansible and terraform can ssh in"
    type        = string
    default     = "x.x.x.x/32"
  }
  ```
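As a convenience, you can override these defaults in a variable definitions file instead of editing variables.tf directly. The following is a minimal sketch, assuming a file named terraform.tfvars (which Terraform loads automatically); the file and all values shown are placeholders, not part of the downloaded templates.

```
# terraform.tfvars -- hypothetical example; every value below is a placeholder
key_name       = "my-ec2-key-pair"   # an EC2 key pair that already exists in your AWS account
my_ip          = "203.0.113.10/32"   # your workstation's public IP in CIDR notation
index_count    = 5
index_ec2_type = "c5.2xlarge"
```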
This section describes the configuration that a main.tf file contains.
- Cloud provider and region
  The following template uses the us-east-2 region. See Available Regions for more information.

  ```
  provider "aws" {
    profile = "default"
    region  = "us-east-2"
  }
  ```
- Security group
  The following template declares a security group that allows incoming traffic from the CIDR address range represented by my_ip declared in variables.tf.

  ```
  resource "aws_security_group" "cluster_sg" {
    name        = "cluster_sg"
    description = "Allows only me to access"
    vpc_id      = aws_vpc.cluster_vpc.id

    ingress {
      description = "All ports from my IP"
      from_port   = 0
      to_port     = 65535
      protocol    = "tcp"
      cidr_blocks = [var.my_ip]
    }

    ingress {
      description = "Full subnet communication"
      from_port   = 0
      to_port     = 65535
      protocol    = "all"
      self        = true
    }

    egress {
      from_port        = 0
      to_port          = 0
      protocol         = "-1"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }

    tags = {
      Name = "cluster_sg"
    }
  }
  ```
- VPC
  The following template specifies a VPC with the 10.0.0.0/24 CIDR block for a Milvus cluster.

  ```
  resource "aws_vpc" "cluster_vpc" {
    cidr_block = "10.0.0.0/24"
    tags = {
      Name = "cluster_vpc"
    }
  }

  resource "aws_internet_gateway" "cluster_gateway" {
    vpc_id = aws_vpc.cluster_vpc.id
    tags = {
      Name = "cluster_gateway"
    }
  }
  ```
- Subnets (Optional)
  The following template declares a subnet whose traffic is routed to an internet gateway. In this case, the size of the subnet's CIDR block is the same as the VPC's CIDR block.

  ```
  resource "aws_subnet" "cluster_subnet" {
    vpc_id                  = aws_vpc.cluster_vpc.id
    cidr_block              = "10.0.0.0/24"
    map_public_ip_on_launch = true

    tags = {
      Name = "cluster_subnet"
    }
  }

  resource "aws_route_table" "cluster_subnet_gateway_route" {
    vpc_id = aws_vpc.cluster_vpc.id

    route {
      cidr_block = "0.0.0.0/0"
      gateway_id = aws_internet_gateway.cluster_gateway.id
    }

    tags = {
      Name = "cluster_subnet_gateway_route"
    }
  }

  resource "aws_route_table_association" "cluster_subnet_add_gateway" {
    subnet_id      = aws_subnet.cluster_subnet.id
    route_table_id = aws_route_table.cluster_subnet_gateway_route.id
  }
  ```
- Node instances (Nodes)
  The following template declares a MinIO node instance. The main.tf template file declares nodes of 11 node types. For some node types, you need to set root_block_device. See EBS, Ephemeral, and Root Block Devices for more information.

  ```
  resource "aws_instance" "minio_node" {
    count         = var.minio_count
    ami           = "ami-0d8d212151031f51c"
    instance_type = var.minio_ec2_type
    key_name      = var.key_name
    subnet_id     = aws_subnet.cluster_subnet.id
    vpc_security_group_ids = [aws_security_group.cluster_sg.id]

    root_block_device {
      volume_type = "gp2"
      volume_size = 1000
    }

    tags = {
      Name = "minio-${count.index + 1}"
    }
  }
  ```
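The other node types follow the same pattern with their own count and instance type variables. As a sketch only (the actual resource in the downloaded main.tf may differ), an index node declaration built from the index_count and index_ec2_type variables shown earlier could look like this:

```
resource "aws_instance" "index_node" {
  count         = var.index_count
  ami           = "ami-0d8d212151031f51c"   # same AMI as the MinIO node above
  instance_type = var.index_ec2_type
  key_name      = var.key_name
  subnet_id     = aws_subnet.cluster_subnet.id
  vpc_security_group_ids = [aws_security_group.cluster_sg.id]

  # This sketch keeps the default root volume, so no root_block_device block is set.
  tags = {
    Name = "index-${count.index + 1}"
  }
}
```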
- Open a terminal and navigate to the folder that stores main.tf.
- To initialize the configuration, run terraform init.
- To apply the configuration, run terraform apply and enter yes when prompted.
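Put together, the provisioning run looks roughly like this in a terminal; the directory name is a placeholder for wherever you saved the templates.

```
cd milvus-cluster-terraform   # placeholder: the folder that contains main.tf
terraform init                # initializes the working directory and downloads the AWS provider
terraform apply               # review the plan, then type "yes" to create the resources
```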
You have now provisioned a Milvus cluster with Terraform.
This section describes how to use Ansible to start the Milvus cluster that you have provisioned.
Ansible is a configuration management tool used to automate cloud provisioning and configuration management.
- Install and configure Ansible
You can download template configuration files at Google Drive.
- Files in the yaml_files folder
  This folder stores Jinja2 files for each node type. Ansible uses Jinja2 templating. See Introduction for more information about Jinja2.
- playbook.yaml
  This file performs a set of tasks on specific sets of nodes. The template begins by installing Docker and Docker Compose on all node instances of the Milvus cluster. A playbook runs in sequence from top to bottom. Within each play, tasks also run in sequence from top to bottom.

  ```
  - name: All Servers
    hosts: etcd_ips_public:pulsar_ips_public:minio_ips_public:data_ips_public:index_ips_public:query_ips_public:proxy_ips_public:root_coordinator_ips_public:data_coordinator_ips_public:query_coordinator_ips_public:index_coordinator_ips_public
    remote_user: ec2-user
    become: true
    tags:
      - start
    tasks:
      - name: Install docker
        ansible.builtin.yum:
          name: docker
          state: present
      - name: Run docker
        ansible.builtin.service:
          name: docker
          state: started
      - name: Install or upgrade docker-compose
        get_url:
          url: "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64"
          dest: /usr/local/bin/docker-compose
          mode: 'a+x'
          force: yes
      - name: Create symbolic link for docker-compose
        file:
          src: "/usr/local/bin/docker-compose"
          dest: "/usr/bin/docker-compose"
          state: link
  ```
  After Docker and Docker Compose are installed on all node instances, playbook.yaml starts containers for all node instances in sequence.

  ```
  - name: etcd
    hosts: etcd_ips_public
    remote_user: ec2-user
    become: true
    tags:
      - start
    tasks:
      - name: Copy etcd config
        ansible.builtin.template:
          src: ./yaml_files/etcd.j2
          dest: /home/ec2-user/docker-compose.yml
          owner: ec2-user
          group: wheel
          mode: '0644'
      - name: Run etcd node
        shell: docker-compose up -d
        args:
          chdir: /home/ec2-user/
  ```
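The remaining plays in playbook.yaml follow the same pattern for each node type. As a sketch, a MinIO play would look roughly like the following; the template name minio.j2 is an assumption based on the yaml_files naming convention, and the actual playbook in the download may differ.

```
- name: minio
  hosts: minio_ips_public
  remote_user: ec2-user
  become: true
  tags:
    - start
  tasks:
    - name: Copy minio config
      ansible.builtin.template:
        src: ./yaml_files/minio.j2        # assumed file name; one Jinja2 template per node type
        dest: /home/ec2-user/docker-compose.yml
        owner: ec2-user
        group: wheel
        mode: '0644'
    - name: Run minio node
      shell: docker-compose up -d
      args:
        chdir: /home/ec2-user/
```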
- Open a terminal and navigate to the folder that stores playbook.yaml.
- Run ansible-playbook -i inventory playbook.yaml --tags "start".
- If successful, all node instances start.
You have now started a Milvus cluster with Ansible.
You can stop all nodes when you no longer need the Milvus cluster.
- Make sure that the terraform binary is available on your PATH.
- Run terraform destroy and enter yes when prompted.
- If successful, all node instances are stopped.