
Pass the current config location to the forked process upon ssh tunnel connection to kube-api #808

Open
ptla opened this issue May 16, 2019 · 0 comments

ptla commented May 16, 2019

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
On Tarmak 0.6.4, after successfully creating a new cluster, I tried to run kubectl with:

tarmak -c . kubectl get pods

it complains that no local configuration was found, and the following output repeats in a loop:

tarmak -c . kubectl get pods
INFO[0000] generating terraform code                     app=tarmak module=terraform
DEBU[0000] created temporary directory: /var/folders/rw/6c489chs0n9gk_lz5fjdtr2m0000gn/T/tarmak-assets998847944
DEBU[0000] restored assets into directory: /var/folders/rw/6c489chs0n9gk_lz5fjdtr2m0000gn/T/tarmak-assets998847944
INFO[0003] initialising terraform                        app=tarmak module=terraform
DEBU[0004] Initializing modules...           app=tarmak module=terraform std=out
DEBU[0004] - module.state                                app=tarmak module=terraform std=out
DEBU[0004] - module.network                              app=tarmak module=terraform std=out
DEBU[0004] - module.tagging_control                      app=tarmak module=terraform std=out
DEBU[0004] - module.bastion                              app=tarmak module=terraform std=out
DEBU[0004] - module.vault                                app=tarmak module=terraform std=out
DEBU[0004]                                               app=tarmak module=terraform std=out
DEBU[0004] Initializing the backend...       app=tarmak module=terraform std=out
DEBU[0006]                                               app=tarmak module=terraform std=out
DEBU[0006] Initializing provider plugins...  app=tarmak module=terraform std=out
DEBU[0007]                                               app=tarmak module=terraform std=out
DEBU[0007] Terraform has been successfully initialized!  app=tarmak module=terraform std=out
INFO[0007] validating terraform code                     app=tarmak module=terraform
INFO[0012] request new certificate from vault (plenv-plenvcluster/pki/k8s/sign/admin)  app=tarmak
INFO[0015] new connection to bastion host successful     app=tarmak
DEBU[0015] active channel position recieved              app=tarmak cluster=hub environment=plenv module=vault
DEBU[0016] time="2019-05-16T11:19:21+01:00" level=fatal msg="unable to find an existing config, run 'tarmak init'"  app=tarmak destination=api.plenv-plenvcluster.tarmak.local tunnel=api.plenv-plenvcluster.tarmak.local
WARN[0019] ssh tunnel connecting to Kubernetes API server will close after 10 minutes of inactivity: https://127.0.0.1:54266  app=tarmak
DEBU[0019] trying to connect to https://127.0.0.1:54266  app=tarmak
WARN[0019] error connecting to cluster: Get https://127.0.0.1:54266/version?timeout=32s: dial tcp 127.0.0.1:54266: connect: connection refused  app=tarmak
INFO[0019] generating terraform code                     app=tarmak module=terraform
INFO[0020] initialising terraform                        app=tarmak module=terraform
DEBU[0021] Initializing modules...           app=tarmak module=terraform std=out
DEBU[0021] - module.state                                app=tarmak module=terraform std=out
DEBU[0021] - module.network                              app=tarmak module=terraform std=out
DEBU[0021] - module.tagging_control                      app=tarmak module=terraform std=out
DEBU[0021] - module.bastion                              app=tarmak module=terraform std=out
DEBU[0021] - module.vault                                app=tarmak module=terraform std=out
DEBU[0021]                                               app=tarmak module=terraform std=out
DEBU[0021] Initializing the backend...       app=tarmak module=terraform std=out
DEBU[0022]                                               app=tarmak module=terraform std=out
DEBU[0022] Initializing provider plugins...  app=tarmak module=terraform std=out
DEBU[0023]                                               app=tarmak module=terraform std=out
DEBU[0023] Terraform has been successfully initialized!  app=tarmak module=terraform std=out
INFO[0023] validating terraform code                     app=tarmak module=terraform
INFO[0029] request new certificate from vault (plenv-plenvcluster/pki/k8s/sign/admin)  app=tarmak
DEBU[0032] active channel position recieved              app=tarmak cluster=hub environment=plenv module=vault
DEBU[0033] time="2019-05-16T11:19:37+01:00" level=fatal msg="unable to find an existing config, run 'tarmak init'"  app=tarmak destination=api.plenv-plenvcluster.tarmak.local tunnel=api.plenv-plenvcluster.tarmak.local
WARN[0035] ssh tunnel connecting to Kubernetes API server will close after 10 minutes of inactivity: https://127.0.0.1:54313  app=tarmak
DEBU[0035] trying to connect to https://127.0.0.1:54313  app=tarmak
WARN[0035] error connecting to cluster: Get https://127.0.0.1:54313/version?timeout=32s: dial tcp 127.0.0.1:54313: connect: connection refused  app=tarmak
INFO[0035] generating terraform code                     app=tarmak module=terraform
INFO[0036] initialising terraform                        app=tarmak module=terraform
DEBU[0037] Initializing modules...           app=tarmak module=terraform std=out
DEBU[0037] - module.state                                app=tarmak module=terraform std=out
DEBU[0037] - module.network                              app=tarmak module=terraform std=out
DEBU[0037] - module.tagging_control                      app=tarmak module=terraform std=out
DEBU[0037] - module.bastion                              app=tarmak module=terraform std=out
[..]

What you expected to happen:
I would expect tarmak to use the local configuration from the environment folder (the directory passed with -c) rather than searching the default ~/.tarmak directory. The tunnel process is forked without the current config location, so it falls back to ~/.tarmak and dies with the fatal error shown above.

See:

t := tarmak.New(globalFlags)

cmd := exec.Command(binaryPath, "tunnel", t.dest, t.destPort, t.localPort)
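
A minimal sketch of the kind of change this asks for, in Go. It assumes the long form of the global -c flag is --config-directory and that the config directory is available as a plain string at the point where the tunnel process is forked; the helper name and the actual accessor in the tarmak codebase are hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// buildTunnelCmd forwards the config directory that the parent tarmak
// process was started with (the value of -c) to the forked tunnel
// process, so the child no longer falls back to ~/.tarmak. The flag's
// long form and how configDir is obtained are assumptions for
// illustration, not tarmak's actual internals.
func buildTunnelCmd(binaryPath, configDir, dest, destPort, localPort string) *exec.Cmd {
	return exec.Command(
		binaryPath,
		"--config-directory", configDir, // pass the parent's -c value through
		"tunnel", dest, destPort, localPort,
	)
}

func main() {
	// Example values: the destination host and local port come from the
	// log above; the binary path and destination port are hypothetical.
	cmd := buildTunnelCmd("/usr/local/bin/tarmak", ".",
		"api.plenv-plenvcluster.tarmak.local", "6443", "54266")
	fmt.Println(cmd.Args)
}

Equivalently, the parent could export the config location in the child's environment; either way, the forked process must inherit the -c value instead of defaulting to ~/.tarmak.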

How to reproduce it (as minimally and precisely as possible):
- Create a new directory containing your tarmak.yaml file, and leave ~/.tarmak empty.
- Run plan and apply on the hub and the cluster, then run tarmak -c . kubectl get pods.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): irrelevant
  • Cloud provider or hardware configuration: AWS
  • Install tools: tarmak
  • Others:
jetstack-bot added the kind/bug label on May 16, 2019.