It's hard to admit, but yeah, GKE really is awesome - no matter how much you know about kubernetes, it has probably done things that you would have to work pretty hard to do on your own.
In particular:
- declarative management of clusters (and your kubectl config) via 'gcloud container clusters ...' commands.
- push-button high availability and ACLs for GCE services (i.e. Cloud SQL and so on).
- Stackdriver logging, which gives you a way to do poor man's metrics.
- the best kube load balancer in the business: Google's fancy packet forwarder.
We'll go over this stuff in this post.
A production kubernetes deployment on GCE is push-button - but what do you do after you push the button?
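For reference, the push-button part is roughly one gcloud invocation - the cluster name, zone, and node count below are just illustrative placeholders:
gcloud container clusters create cluster-1 --zone us-central1-a --num-nodes 3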
Your first kubernetes cluster probably looked something like this, if you used kubernetes.io.
When you go to container engine, however, your VMs will be anonymous. It's an HA system, so there isn't a clear division between the masters and the workers.
To see the GKE-native view of things, rather than looking at your VMs, head to the GKE interface.
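You can get the same GKE-native view from the command line as well; assuming the cluster-1 / us-central1-a values used later in this post:
gcloud container clusters list
gcloud container clusters describe cluster-1 --zone us-central1-a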
Now, set up kubectl by grabbing the 'Connect to the cluster' credentials it gives you. The first bit will be a gcloud command,
gcloud container clusters get-credentials cluster-1 --zone us-central1-a --project gke-verification
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
And then, you'll have a kubectl that is hardwired to talk to the cluster you just got credentials for.
➜ ~ kubectl config view | grep current-context
current-context: gke_gke-verification_us-central1-a_cluster-1
kubectl proxy will then set up a cluster administration endpoint for you, at http://localhost:8001/ui... You'll see no pods or deployments running in the default namespace.
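Running the proxy is a one-liner, and you can sanity-check the empty default namespace with kubectl as well:
kubectl proxy   # serves the API (and /ui) on localhost:8001
kubectl get pods -n default   # should come back empty on a fresh cluster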
Multiple clusters
Managing one kubernetes cluster is easy enough this way, but in order to run a production kubernetes system, you'll likely have multiple clusters. This is the current idiom while we wait for kubernetes to mature into a full-fledged data center technology with advanced scheduler defragmentation, user multitenancy, and IAM support. For example, in Kops there is a tremendous amount of liberty that the API server has to create new AWS services that may be very expensive, and so on. Thus, you may want to fence your clusters off for different users so you know what's going on when large container loads are scheduled / disks are provisioned / cloud LB endpoints are created / etc...
So, to switch kubernetes contexts, simply look up the various clusters in your kubeconfig file...
➜ ~ kubectl config view | grep -A 2 context
contexts:
- context:
cluster: gke-verification_kubernetes
user: gke-verification_kubernetes
--
- context:
cluster: gke_gke-verification_us-central1-a_cluster-1
user: gke_gke-verification_us-central1-a_cluster-1
--
current-context: gke_gke-verification_us-central1-a_cluster-1
kind: Config
preferences: {}
And use kubectl config use-context ____ to switch between them.
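For example, to flip to the GKE cluster from the config above:
kubectl config use-context gke_gke-verification_us-central1-a_cluster-1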
Poor man's metrics with log counters
If you turn on Google's native logging solution, you can create custom log-based metrics, where you count the volume / frequency of log entries slurped from your machines using Stackdriver.
This will allow your business to continue using logging data in a cloud-native manner - such that things are aggregated in a machine-independent and highly dynamic way - all from your console.
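You can do this from the console UI, or from the CLI; a rough sketch (the metric name and log filter here are made up, and this surface has lived under 'gcloud beta logging' in some SDK releases):
gcloud logging metrics create error-count \
  --description="count of error-level container logs" \
  --log-filter='resource.type="container" AND severity>=ERROR'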
To see how easy the cloud LB is to use, just use kubectl expose on any service. I won't even bother walking through it in detail b/c it's dead simple. It will give you a public GCE endpoint that actually preserves your client IP and forwards packets into the respective service for you.
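If you want the one-liner anyway, it's roughly this (the deployment name 'nginx' is just a placeholder for whatever you're running):
kubectl expose deployment nginx --port=80 --type=LoadBalancer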



