problems using different users to create and apply a plan #2594

Open
tormodmacleod opened this issue Sep 27, 2024 · 1 comment

@tormodmacleod

Hello,

When we create a plan, we do it with a read-only role. We then apply plans with a role that has admin permissions. However, when the apply tries to create resources on our EKS cluster, we get an error that references the role that created the plan.

Terraform Version, Provider Version and Kubernetes Version

$ terraform -v
Terraform v1.9.6
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v5.68.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.3.5
+ provider registry.terraform.io/hashicorp/kubernetes v2.32.0
+ provider registry.terraform.io/hashicorp/null v3.2.3
+ provider registry.terraform.io/hashicorp/time v0.12.1
+ provider registry.terraform.io/hashicorp/tls v4.0.6

$ kubectl version --output=yaml
clientVersion:
  buildDate: "2024-06-11T00:00:00Z"
  compiler: gc
  gitCommit: fb63712e1d017142977e88a23644b8e48b775665
  gitTreeState: archive
  gitVersion: v1.27.15
  goVersion: go1.21.10
  major: "1"
  minor: "27"
  platform: linux/amd64
kustomizeVersion: v5.0.1
serverVersion:
  buildDate: "2024-08-26T21:27:41Z"
  compiler: gc
  gitCommit: 3277d87d88d0bf66b6368ce57e49b2f2aab01b0d
  gitTreeState: clean
  gitVersion: v1.29.8-eks-a737599
  goVersion: go1.22.5
  major: "1"
  minor: 29+
  platform: linux/amd64

Affected Resource(s)

  • kubernetes_secret_v1

Terraform Configuration Files

resource "kubernetes_secret_v1" "test" {
  metadata {
    name = "test-secret"
  }

  data = {
    username = "admin"
    password = "P4ssw0rd"
  }

  type = "Opaque"
}

Debug Output

https://gist.github.com/tormodmacleod/999bfff342122873a49eb4389a709265

Panic Output

Steps to Reproduce

  1. assume plan role
  2. terraform plan -out /tmp/plan
  3. assume apply role
  4. terraform apply "/tmp/plan"

Expected Behavior

The secret is created.

Actual Behavior

$ aws sts get-caller-identity 
{
    "UserId": "AROAUMIBCCYJWM2JCOOAO:apply",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/apply/apply"
}
$ terraform apply "/tmp/plan"
kubernetes_secret_v1.test: Creating...
╷
│ Error: secrets is forbidden: User "arn:aws:sts::123456789012:assumed-role/plan/plan" cannot create resource "secrets" in API group "" in the namespace "default"
│ 
│   with kubernetes_secret_v1.test,
│   on test.tf line 5, in resource "kubernetes_secret_v1" "test":
│    5: resource "kubernetes_secret_v1" "test" {

Important Factoids

The plan role has AWS read-only privileges and the AmazonEKSAdminViewPolicy EKS access policy:

resource "aws_iam_role" "plan" {
  name = "plan"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
        }
      },
    ]
  })

  managed_policy_arns = ["arn:aws:iam::aws:policy/ReadOnlyAccess"]
}

resource "aws_eks_access_entry" "plan" {
  cluster_name      = module.eks.cluster_name
  principal_arn     = aws_iam_role.plan.arn
  kubernetes_groups = []
  type              = "STANDARD"
}

resource "aws_eks_access_policy_association" "plan" {
  cluster_name  = module.eks.cluster_name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy"
  principal_arn = aws_iam_role.plan.arn

  access_scope {
    type = "cluster"
  }
}

The apply role has AWS administrator privileges and the AmazonEKSClusterAdminPolicy EKS access policy:

resource "aws_iam_role" "apply" {
  name = "apply"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
        }
      },
    ]
  })

  managed_policy_arns = ["arn:aws:iam::aws:policy/AdministratorAccess"]
}

resource "aws_eks_access_entry" "apply" {
  cluster_name      = module.eks.cluster_name
  principal_arn     = aws_iam_role.apply.arn
  kubernetes_groups = []
  type              = "STANDARD"
}

resource "aws_eks_access_policy_association" "apply" {
  cluster_name  = module.eks.cluster_name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
  principal_arn = aws_iam_role.apply.arn

  access_scope {
    type = "cluster"
  }
}
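One configuration pattern that could produce the error above, although it is not confirmed by this report, is a provider block whose token comes from the aws_eks_cluster_auth data source. That token is generated during plan, while the plan role's credentials are active, and is carried along with the saved plan, so the apply still authenticates to the cluster as the plan role. A hypothetical sketch of that pattern (the module.eks reference mirrors the configuration above; none of this is taken from the report):

# Hypothetical provider configuration, not taken from the report: the token is
# generated under whatever credentials are active at plan time and then reused
# verbatim when the saved plan is applied.
data "aws_eks_cluster" "this" {
  name = module.eks.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}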

References

There is a similar issue reported for GKE, although it is not an exact match.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@alexsomesan
Member

Can you please share how you are configuring the provider?
By that I mean the provider "kubernetes" { ... } block and any KUBE_* environment variables present in the execution environment.
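For reference, an exec-based provider configuration generates the EKS token at the moment the provider connects rather than at plan time, so the identity used during apply is whichever role is active when terraform apply runs. A sketch, assuming the cluster endpoint and CA are looked up as in the data sources shown earlier and that the aws CLI is available on the PATH:

# Sketch of an exec-based configuration: "aws eks get-token" runs when the
# provider opens a connection, so the token reflects the apply-time credentials.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}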
