
Existing cluster demo broken #343

Open
chrislovecnm opened this issue Apr 13, 2022 · 4 comments
Comments

@chrislovecnm

Summary

The existing cluster demo builds a new VPC and tries to build a new EKS cluster.

Steps to reproduce the behavior

Run the demo in terraform-aws-eks-jx/examples/existing-cluster

Expected behavior

Reuse the VPC and the EKS cluster

Actual behavior

It builds the VPC and tries to create a cluster.

Terraform version

Tested on master with the latest Terraform, and on v1.18.11 with Terraform 0.13.5.

Module version

master and v1.18.11

Operating system

Linux container

@ankitm123
Member

Can you share your main.tf file? Remember, you don't need to include:

module "eks" {
  depends_on      = [module.vpc]
  source          = "terraform-aws-modules/eks/aws"
  version         = "12.1.0"
  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version
  subnets         = (var.cluster_in_private_subnet ? module.vpc.private_subnets : module.vpc.public_subnets)
  vpc_id          = module.vpc.vpc_id
  enable_irsa     = true

  worker_groups_launch_template = var.enable_worker_group && var.enable_worker_groups_launch_template ? [
    for subnet in module.vpc.public_subnets :
    {
      subnets                 = [subnet]
      asg_desired_capacity    = var.lt_desired_nodes_per_subnet
      asg_min_size            = var.lt_min_nodes_per_subnet
      asg_max_size            = var.lt_max_nodes_per_subnet
      spot_price              = (var.enable_spot_instances ? var.spot_price : null)
      instance_type           = var.node_machine_type
      override_instance_types = var.allowed_spot_instance_types
      autoscaling_enabled     = "true"
      public_ip               = true
      tags = [
        {
          key                 = "k8s.io/cluster-autoscaler/enabled"
          propagate_at_launch = "false"
          value               = "true"
        },
        {
          key                 = "k8s.io/cluster-autoscaler/${var.cluster_name}"
          propagate_at_launch = "false"
          value               = "true"
        }
      ]
    }
  ] : []

  worker_groups = var.enable_worker_group && !var.enable_worker_groups_launch_template ? [
    {
      name                 = "worker-group-${var.cluster_name}"
      instance_type        = var.node_machine_type
      asg_desired_capacity = var.desired_node_count
      asg_min_size         = var.min_node_count
      asg_max_size         = var.max_node_count
      spot_price           = (var.enable_spot_instances ? var.spot_price : null)
      key_name             = (var.enable_key_name ? var.key_name : null)
      root_volume_type     = var.volume_type
      root_volume_size     = var.volume_size
      root_iops            = var.iops
      tags = [
        {
          key                 = "k8s.io/cluster-autoscaler/enabled"
          propagate_at_launch = "false"
          value               = "true"
        },
        {
          key                 = "k8s.io/cluster-autoscaler/${var.cluster_name}"
          propagate_at_launch = "false"
          value               = "true"
        }
      ]
    }
  ] : []

  node_groups = !var.enable_worker_group ? {
    eks-jx-node-group = {
      ami_type         = var.node_group_ami
      disk_size        = var.node_group_disk_size
      desired_capacity = var.desired_node_count
      max_capacity     = var.max_node_count
      min_capacity     = var.min_node_count
      instance_type    = var.node_machine_type
      k8s_labels = {
        "jenkins-x.io/name"       = var.cluster_name
        "jenkins-x.io/part-of"    = "jx-platform"
        "jenkins-x.io/managed-by" = "terraform"
      }
      additional_tags = {
        aws_managed = "true"
      }
    }
  } : {}

  workers_additional_policies = [
    "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser"
  ]

  map_users    = var.map_users
  map_roles    = var.map_roles
  map_accounts = var.map_accounts

  cluster_endpoint_private_access = var.cluster_endpoint_private_access
  cluster_endpoint_public_access  = var.cluster_endpoint_public_access
}
I should make it very clear in the documentation.

@chrislovecnm
Author

chrislovecnm commented Apr 14, 2022

I got the following main.tf working, and yes, you do not need to include the VPC and EKS stuff, which should not be in the example ;)

// The VPC and EKS resources have been created, just install the cloud resources required by jx
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"
  region       = var.region
  use_vault    = var.use_vault
  use_asm      = var.use_asm
  cluster_name = var.cluster_name
  is_jx2       = var.is_jx2
  create_eks   = var.create_eks
  create_vpc   = var.create_vpc
  create_nginx = var.create_nginx
  jx_git_url   = var.jx_git_url
  apex_domain  = var.apex_domain
  tls_email    = var.tls_email
  use_kms_s3   = var.use_kms_s3
  registry     = var.registry

  nginx_chart_version = var.nginx_chart_version
  cluster_version     = var.cluster_version
  enable_backup       = var.enable_backup
  jx_bot_username     = var.jx_bot_username
  jx_bot_token        = var.jx_bot_token
  enable_external_dns = var.enable_external_dns

  jx_git_operator_values = var.jx_git_operator_values
  production_letsencrypt = var.production_letsencrypt

}
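For anyone hitting the same problem: the main.tf above reuses an existing cluster only when the module's creation flags are turned off. A minimal terraform.tfvars sketch for the variables referenced above (the cluster name and region values are illustrative, not from the thread):

// terraform.tfvars -- point the module at the cluster that already
// exists and tell it not to create a new VPC or EKS cluster.
cluster_name = "my-existing-cluster" // hypothetical; use your real cluster name
region       = "us-east-1"           // hypothetical; use your cluster's region
create_vpc   = false                 // reuse the existing VPC
create_eks   = false                 // reuse the existing EKS cluster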

@chrislovecnm
Author

Did you want me to file a PR to fix this?

@ankitm123
Member

Did you want me to file a PR to fix this?

Yes, that would be awesome.

I am fine if you want to remove the EKS and VPC bits and only keep the jx part. The reason I kept them was to show people how to create an EKS cluster and VPC outside of the module, but I can see it can be confusing, and it's just more work to maintain those scripts. A link to examples of the VPC and EKS modules would probably be sufficient, IMO (just add some comments in main.tf).
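Along the lines of that suggestion, the example's main.tf could carry a short pointer instead of the full resource blocks; a sketch of such a comment (the eks registry path is the one used above, the vpc path is assumed to be the matching upstream module):

// The VPC and EKS cluster are expected to already exist.
// To create them outside this module, see the upstream examples:
//   - terraform-aws-modules/vpc/aws
//   - terraform-aws-modules/eks/aws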
