18.12.18

Gcloud, Git, Kubectl

One might think kubectl would run quite naturally inside a container, due to the InClusterConfig which the Kubernetes client smartly tries at startup.
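
Quick refresher on why InClusterConfig "just works" when you really are running inside a pod: the kubelet mounts a service account token and CA cert under /var/run/secrets/kubernetes.io/serviceaccount and sets the KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT env vars, and the client stitches those together. Roughly the same thing done by hand, from inside any pod, looks like this:

# the raw ingredients InClusterConfig uses
APISERVER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat $SA/token)

# hit the API the same way the client would, no kubeconfig involved
curl --cacert $SA/ca.crt -H "Authorization: Bearer $TOKEN" $APISERVER/api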

However, looking closer, Google has a very platform-specific kubectl config, which actually wants the gcloud binary to be installed.

*** quick side note on why kubectl logins are non-trivial: look at the format of a real kubeconfig file. ***

You can use the CSR API, with a workflow like the one described in this shell script https://github.com/gravitational/teleport/blob/master/examples/gke-auth/get-kubeconfig.sh, to grab a cert that logs you into any arbitrary kube cluster.  But of course... this still requires you to have logged in with gcloud at least once so that you can make the request in the first place.
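
For the curious, the guts of that script boil down to the standard CSR dance, roughly like this (a sketch only: "jaysbox" is a placeholder user, certificates.k8s.io/v1beta1 was the API group at the time of writing, and you need rights to approve CSRs on the cluster):

# generate a key and a signing request for the user you want to mint
openssl genrsa -out jaysbox.key 2048
openssl req -new -key jaysbox.key -out jaysbox.csr -subj "/CN=jaysbox/O=dev"

# push it through the Kubernetes CSR API, approve it, pull the signed cert back out
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: jaysbox
spec:
  request: $(base64 < jaysbox.csr | tr -d '\n')
  usages: ["digital signature", "key encipherment", "client auth"]
EOF
kubectl certificate approve jaysbox
kubectl get csr jaysbox -o jsonpath='{.status.certificate}' | base64 -d > jaysbox.crt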

So, while we're at it, we might as well take a quick look at a kubeconfig file...

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: ...
  name: k8s
contexts:
- context:
    cluster: k8s
    user: jaysbox
  name: k8s
current-context: k8s
kind: Config
preferences: {}
users:
- name: jaysbox
  user:
    client-certificate-data: ...
    client-key-data: ...

... Users are where it gets complicated ...

If you zoom into a kubeconfig file, you'll notice that each user stanza can be completely different. For example, minikube authentication is clearly movable across platforms, so long as you have the same client crt/key credentials stored locally.  Meanwhile, gcloud's authentication is riddled with minefields: parameters, binaries that need to be installed, and so on.

- name: gke_my_custer_3
  user:
    auth-provider:
      config:
        access-token: ya29.GltkBpz4aIegk******g4i5wNbQ
        cmd-args: config config-helper --format=json
        cmd-path: /Users/jayunit100/google-cloud-sdk/bin/gcloud
        expiry: 2018-11-30 19:19:18
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
- name: minikube
  user:
    client-certificate: /Users/jayunit100/.minikube/client.crt
    client-key: /Users/jayunit100/.minikube/client.key
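
For context on the cmd-path / cmd-args pair above: kubectl literally shells out to gcloud and plucks the token and expiry out of its JSON output using those JSONPath expressions. You can see what it is parsing (output trimmed down, values faked):

gcloud config config-helper --format=json

{
  "configuration": { ... },
  "credential": {
    "access_token": "ya29.GltkBpz4aIegk******g4i5wNbQ",
    "token_expiry": "2018-11-30T19:19:18Z"
  }
}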

*** /end quick side note ***

Thus, unless you do a proxy of some sort into your docker containers, to access a GKE cluster you need gcloud in your container (unless you mounted svc accounts, i.e. you're running that container in kube itself).
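
For completeness, the "proxy of some sort" option looks like this: run kubectl proxy on your laptop, where gcloud already works, and point the in-container kubectl at it over Docker for Mac's host.docker.internal. A sketch, and not something to leave running, since the proxy answers without auth:

# on the host, where your gcloud login already works
kubectl proxy --port=8001 --address=0.0.0.0 --accept-hosts='.*'   # careful: anyone who reaches this port gets your cluster access

# inside the container (host.docker.internal resolves to the host on Docker for Mac)
kubectl --server=http://host.docker.internal:8001 get nodes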

So, quickly, here's how I'm using docker to set up my dev environment in a container, on Google, w/ git to boot.

First, run a deb/rpm based image (any image), mounting in your ssh directory:

 docker run -v /Users/jayunit100/.ssh:/root/.ssh -t -i mypythonimage:latest /bin/sh

This way you can do things like git clone.
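
For example (assuming a debian-ish image; the repo name here is just a placeholder):

apt-get update && apt-get install -y git openssh-client
git clone git@github.com:jayunit100/some-repo.git   # uses the keys mounted from ~/.ssh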


Then, put Gcloud into it.

This is ridiculously easy... once you do it, you can use your kubernetes client from inside a container even if you're not running the container in a Google-created cluster.


curl -sSL https://sdk.cloud.google.com | bash

After you run the login, you have to open a link they give you in the terminal, and paste in the token that you get back from the browser...
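
Spelled out, the login plus wiring kubectl up to your cluster looks roughly like this (cluster, zone, and project names are placeholders):

gcloud auth login --no-launch-browser     # prints the link, then asks for the verification code
gcloud config set project my-gcp-project
gcloud container clusters get-credentials my-cluster --zone us-east1-b   # writes the gke user stanza into ~/.kube/config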

Now, make sure you have kubectl 

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl && mv kubectl /usr/local/bin/

From there, you can run kubectl and git clone (via ssh creds) as needed to do all your hacking from inside a container, which paves the way for a development environment that minimally deviates from your production installation or dev environment (which, in 2018, is likely driven entirely from docker containers).
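
And if you get tired of doing all that by hand, the same steps bake into a little bootstrap script easily enough. A sketch (untested; the --disable-prompts flag on the SDK installer and the paths are the usual defaults, adjust for your base image):

#!/bin/sh
set -e
apt-get update && apt-get install -y curl git openssh-client python
curl -sSL https://sdk.cloud.google.com | bash -s -- --disable-prompts
export PATH=$PATH:/root/google-cloud-sdk/bin
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl && mv kubectl /usr/local/bin/
# still interactive afterwards: gcloud auth login, then gcloud container clusters get-credentials ...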
