14.7.17

Helm ON

Setting up helm on a secured kubernetes cluster.

Thanks to the heptio folks for helping me get this working by following up on https://github.com/heptio/aws-quickstart/issues/75 so quickly!

Setting up helm is easy on an insecure kubernetes cluster.  On a more secure, RBAC-enabled setup, you will see something like this:

[storage/driver] 2017/07/14 18:12:20 list: failed to list: User "system:serviceaccount:default:default" cannot list configmaps in the namespace "default". (get configmaps)


ARGGG, WHAT'S GOING ON? WHY CAN'T HELM ACCESS THE INTERNAL API?

Helm's server-side component (tiller) does something you probably don't need to do in your everyday pods: access the pods and config maps in various namespaces.  In this case it's default, but on many clusters the same thing will happen in kube-system, and so on.
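
If you want to confirm it's really an RBAC denial, you can impersonate the service account and ask the API server directly (a quick sanity check, assuming your kubectl is new enough to have the auth can-i subcommand and that your own user is allowed to impersonate):

# Ask whether tiller's current identity is allowed to list configmaps.
# "no" here confirms the denial in the log line above.
kubectl auth can-i list configmaps --namespace=default --as=system:serviceaccount:default:default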

So, you have to fix this by launching helm with a special service account.

helm init --service-account helm

Obviously this won't just work, because you need to create the account first... and place it in the kube-system namespace, which is where helm (by default) will try to create its pods:

i.e. if you look at kubectl get events --all-namespaces, you'll see:

kube-system   2017-07-14 21:37:12 +0000 UTC   2017-07-14 21:34:28 +0000 UTC   16        tiller-deploy-4247546325   ReplicaSet             Warning   FailedCreate   replicaset-controller   Error creating: pods "tiller-deploy-4247546325-" is forbidden: service account kube-system/helm was not found, retry after the service account is created


So stop screwing around, and create this ServiceAccount, plus the ClusterRoleBinding that makes the account we referenced above a reality :)...

linuxbox@ip-10-0-22-242:/opt$ cat ~/helm.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: helm
    namespace: kube-system

Go ahead and run kubectl create -f helm.yaml... and you'll see helm come up properly.
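
Putting the whole dance together, it looks roughly like this (same commands as above):

# Create the service account and its cluster-admin binding:
kubectl create -f ~/helm.yaml

# (Re)initialize helm so tiller runs as that account:
helm init --service-account helm

# Watch the tiller pod come up in kube-system:
kubectl get pods --namespace=kube-system | grep tiller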

Setting up helm on heptio or any other cluster that secures service accounts by default means you have to create a dedicated service account for helm, and then reference it in a ClusterRoleBinding (here bound to cluster-admin, which is the big-hammer approach, but fine for getting started).

linuxbox@ip-10-0-22-242:/opt$ kubectl delete deployment tiller-deploy --namespace=kube-system ; kubectl delete service tiller-deploy --namespace=kube-system ; rm -rf ~/.helm/

linuxbox@ip-10-0-22-242:/opt$ helm-exec init --service-account helm

PS: I've created an upstream issue for helm to make these errors easier to debug; as of now, without RBAC in place, the helm client just gives you cryptic errors about the inability to make API calls.

One other thing.

If you screw up your first helm install, you can delete it like so:

kubectl delete deployment tiller-deploy --namespace=kube-system ; kubectl delete service tiller-deploy --namespace=kube-system ; rm -rf ~/.helm/
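
Depending on your helm version, helm reset may do most of this cleanup for you (it tears down tiller, and has a flag to wipe local state too; I'm going from memory here, so check helm reset --help first):

# Possibly-easier alternative to the manual deletes above:
helm reset    # add --remove-helm-home to also delete ~/.helm/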


With helm installed, you can productionize your kube cluster with metrics etc... 

First, follow the prometheus helm chart's instructions, and then expose the prometheus server deployment (the deployment name below comes from my auto-generated helm release name; yours will differ):

kubectl expose --namespace=default deployment waxen-crocodile-promethe --type=LoadBalancer --port=9090 --target-port=9090 --name=prom-gateway2
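
It can take a minute for your cloud provider to allocate the load balancer; you can watch for the external address like so:

# The EXTERNAL-IP column, once populated, is your prometheus URL (port 9090):
kubectl get service prom-gateway2 --namespace=default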

And then you will have an external URL.  You can then launch grafana:

/opt/helm-exec install stable/grafana
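
The chart prints login details in its NOTES output when the install finishes, but roughly: grafana's admin password lands in a kubernetes secret named after your release.  Something like this should dig it out (the exact secret and key names vary by chart version, so treat this as a sketch and check the chart's own output):

# Hypothetical secret/key names; substitute your actual release name:
kubectl get secret --namespace default <release-name>-grafana -o jsonpath="{.data.grafana-admin-password}" | base64 --decode ; echo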

At that point, you can connect your prometheus metrics to grafana by adding a data source of type Prometheus in the grafana UI:
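
Since grafana is running inside the cluster, you can point the data source at the service we exposed above via its in-cluster DNS name instead of the external URL (assuming the default cluster.local domain):

# Data source URL for grafana (type: Prometheus):
http://prom-gateway2.default.svc.cluster.local:9090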


Now, you can make your own dashboards as well.





3 comments:

  1. This is very helpful. Thank you!

  2. Very helpful, thanks! One question however, where is your waxen-crocodile-promethe deployment created?

  3. Btw, I wrote a post based on this. https://medium.com/@maniankara/helm-on-gke-cluster-quick-hands-on-guide-ecffad94b0