- create an EncryptionConfiguration file
- point the API server at it via apiServer extraArgs: encryption-provider-config: "/etc/kubernetes/enc/enc.yaml"
- mount it in via the KubeadmConfig (or KubeadmControlPlane); a rough sketch follows below
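Here is a rough, untested sketch of what that could look like on a Cluster API KubeadmControlPlane object. The object name is made up, and the apiVersion may differ depending on your TKG / Cluster API version:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1   # may be an older alpha version on older TKG releases
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane        # hypothetical name
spec:
  kubeadmConfigSpec:
    # write the EncryptionConfiguration from step 2 onto every control plane node
    files:
    - path: /etc/kubernetes/enc/enc.yaml
      permissions: "0600"
      content: |
        apiVersion: apiserver.config.k8s.io/v1
        kind: EncryptionConfiguration
        # ... (the rest of the enc.yaml shown later in this post)
    clusterConfiguration:
      apiServer:
        # same flag we hand-patch into kube-apiserver.yaml below
        extraArgs:
          encryption-provider-config: /etc/kubernetes/enc/enc.yaml
        # and the same hostPath mount, expressed the kubeadm way
        extraVolumes:
        - name: enc
          hostPath: /etc/kubernetes/enc
          mountPath: /etc/kubernetes/enc
          readOnly: true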
-------- original post -------
NOTE: This isn't an automated solution, but rather just a verification that a one-time migration to an encrypted secret store will work on TKG. To automate this, you'd need to patch TKG's YTT files to create the API server's encryption config YAML through a YTT input parameter, and then additionally consume that input parameter inside of the ./.config/tanzu/tkg/providers directory, which has the YAML templates used to make TKG clusters.
Anyways, that might be a follow-on post if I get the time for it.
So, today V sent me over here https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/ and we decided to see if it works on TKG clusters. Here's what we found.
Ok, so evidently folks wanna do this. Let's see if we can get it to work!
1. First, generate the 32-byte key for the encryption config, per the K8s docs https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
head -c 32 /dev/urandom | base64
- output = 37zjmOs51+qEr1VY6nRInoi0PWDo7fOUfuUubh/aejY=
2. Then jam it into /etc/kubernetes/enc/
sudo mkdir /etc/kubernetes/enc/ ; sudo chmod -R 777 /etc/kubernetes/enc ; vi /etc/kubernetes/enc/enc.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: 37zjmOs51+qEr1VY6nRInoi0PWDo7fOUfuUubh/aejY=
      - identity: {}
3. Ok, now patch the kube-apiserver static pod manifest:
scp capv@10.170.69.164:/etc/kubernetes/manifests/kube-apiserver.yaml .
vim kube-apiserver.yaml
diff --git a/kube-apiserver.yaml b/kube-apiserver.yaml
index d3eece0..34a4579 100755
--- a/kube-apiserver.yaml
+++ b/kube-apiserver.yaml
@@ -13,6 +13,7 @@ spec:
   containers:
   - command:
     - kube-apiserver
+    - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml
     - --advertise-address=10.170.69.164
     - --allow-privileged=true
     - --authorization-mode=Node,RBAC
@@ -78,6 +79,9 @@ spec:
       periodSeconds: 10
       timeoutSeconds: 15
     volumeMounts:
+    - mountPath: /etc/kubernetes/enc
+      name: enc
+      readOnly: true
     - mountPath: /etc/ssl/certs
       name: ca-certs
       readOnly: true
@@ -99,6 +103,10 @@ spec:
     seccompProfile:
       type: RuntimeDefault
   volumes:
+  - name: enc
+    hostPath:
+      path: /etc/kubernetes/enc
+      type: DirectoryOrCreate
   - hostPath:
       path: /etc/ssl/certs
       type: DirectoryOrCreate
Now COPY IT BACK ....
scp kube-apiserver.yaml capv@10.170.69.164:/etc/kubernetes/manifests/kube-apiserver.yaml
That will result in the apiserver AUTOMATICALLY restarting... (because the kubelet is monitoring that directory), so you don't have to do anything else.
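Two quick sanity checks here (both hedged: the first just confirms the kubelet recreated the static pod with the new flag, the second creates the same test secret the upstream doc uses, so there's something named secret1 to go poke at in etcd in step 4):

# confirm the kubelet brought the apiserver static pod back up with the new flag
kubectl -n kube-system get pods | grep kube-apiserver
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep encryption-provider-config

# create the test secret from the upstream doc (this is the secret1 we read below)
kubectl create secret generic secret1 -n default --from-literal=mykey=mydata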
4. SSH into the capv@ node above, and run (note: the overlayfs snapshot number in the etcdctl path below will likely differ on your node):
alias etcdctl=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/14/fs/usr/local/bin/etcdctl
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key get /registry/secrets/default/secret1 | hexdump -C
You'll see a disgusting array of letters that MAKES NO SENSE, and it has something like k8s:enc:aescbc:v1: in it.
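And just to confirm the API server still hands the secret back decrypted (the same check the upstream doc suggests), read it back through kubectl; the data should come back as the normal base64 of the original value:

kubectl get secret secret1 -n default -o yaml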
NOW RE-ENCRYPT ALL YOUR EXISTING SECRETS...
kubo@b1UCJZt7wX25Y:~$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
secret/ako-sa-token-6hf72 replaced
secret/avi-secret replaced
secret/default-token-jwjc2 replaced
secret/default-token-ts66k replaced
secret/secret1 replaced
secret/default-token-6rqpl replaced
secret/default-token-58gcs replaced
secret/antctl-token-7f4j8 replaced
secret/antrea-agent-token-tlb96 replaced
secret/antrea-controller-token-nkqjg replaced
secret/attachdetach-controller-token-499p5 replaced
secret/bootstrap-signer-token-cpxh2 replaced
secret/bootstrap-token-h9dj4z replaced
secret/certificate-controller-token-ts45v replaced
secret/cloud-controller-manager-token-8tph4 replaced
secret/cloud-provider-vsphere-credentials replaced
secret/clusterrole-aggregation-controller-token-bltfk replaced
secret/coredns-token-qs6mz replaced
secret/cronjob-controller-token-xj8qh replaced
secret/daemon-set-controller-token-dwmwf replaced
secret/default-token-2ql87 replaced
secret/deployment-controller-token-khvlw replaced
secret/disruption-controller-token-77pwp replaced
secret/endpoint-controller-token-mbtm9 replaced
secret/endpointslice-controller-token-2ftd7 replaced
secret/endpointslicemirroring-controller-token-q65rk replaced
secret/ephemeral-volume-controller-token-6dp8k replaced
secret/expand-controller-token-blpf6 replaced
secret/generic-garbage-collector-token-5dq5j replaced
secret/horizontal-pod-autoscaler-token-mhpt7 replaced
secret/job-controller-token-c9ftk replaced
secret/kube-proxy-token-rlsbl replaced
secret/metrics-server-token-rn6ff replaced
secret/namespace-controller-token-wzl22 replaced
secret/node-controller-token-89z4t replaced
secret/persistent-volume-binder-token-rzdwm replaced
secret/pod-garbage-collector-token-m5rzc replaced
secret/pv-protection-controller-token-gdljq replaced
secret/pvc-protection-controller-token-swsbc replaced
secret/replicaset-controller-token-hphhn replaced
secret/replication-controller-token-4mz8p replaced
secret/resourcequota-controller-token-gj9wf replaced
secret/root-ca-cert-publisher-token-c9cxf replaced
secret/service-account-controller-token-6glgt replaced
secret/statefulset-controller-token-rngzp replaced
secret/token-cleaner-token-2fdjj replaced
secret/ttl-after-finished-controller-token-d7ptc replaced
secret/ttl-controller-token-j5d97 replaced
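To double-check that the replace actually re-encrypted things at rest, re-read secret1 straight out of etcd again (same etcdctl alias and TLS flags as step 4); it should now show the aescbc prefix instead of plaintext:

etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key get /registry/secrets/default/secret1 | hexdump -C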
5. DID I BREAK CAPI MONITORING?
Now this is kind of an interesting problem because CAPI actually watches etcd. So this made me wonder: is there a possibility that changing the way secrets are encrypted results in the CAPI management cluster not being happy with etcd on the WL cluster anymore?
To check that, we just do a quick tanzu cluster get on the WL cluster, and we can see it's still healthy.
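Something like this, run wherever your Tanzu CLI is pointed at the management cluster (the cluster name here is made up):

tanzu cluster get my-workload-cluster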
Now, as a way to intentionally break the WL cluster (to confirm that, when etcd isn't reachable, we see a signal), mv /etc/kubernetes/manifests/etcd.yaml to /tmp, and then in a few minutes you'll see the failure show up in the capi controller manager logs (capi-controller-manager-59c98f9df8-6wvfk).
And then, move it back (mv /tmp/etcd.yaml /etc/kubernetes/manifests/etcd.yaml) ... and you'll see capi becoming happy again! Thus confirming:
- Capi controller manager is monitoring etcd
- Capi controller manager detects if etcd isn't reachable through its apiserver proxy
- Capi controller manager isn't broken, because, by simply keeping your apiserver alive, you can rescue capi controller manager back into reporting that it's all happy.
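If you want to watch this from the management cluster side rather than just via tanzu cluster get, something like the following should work (a sketch; capi-system and the deployment name are the stock Cluster API ones that TKG uses, but double-check on your version):

# machine health/phase for the workload cluster, as seen by CAPI
kubectl get machines -A

# tail the capi controller logs to see it complain (and then recover) about etcd
kubectl logs -n capi-system deploy/capi-controller-manager --tail=50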
Now, after all that unnecessary drama, make sure your cluster is still up
Ok! So we can easily swap out our cluster's etcd encryption status on the apiserver, and as long as we put the base64 key into the corresponding apiserver, we're all good.
NOTE: If you're upgrading clusters, you'll need to redo this process because this is not a native TKG feature... we could, however, do some YTT / ClusterClass magic to someday support this.