Why GlusterFS?
After spending some time looking into the Crunchy Data deployments, I found that, by default, they request ReadWriteMany storage in their persistent volume claims.
I initially wondered why... so I tried to hack this dependency away, and ran some of the Crunchy master/replica deployments with a ReadWriteOnce filesystem (GCE persistent disks).
You can do this easily by using dynamic storage provisioning:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: crunchy-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      ...
The result?
The kubernetes scheduler in a multizone configuration gets PISSED, because it hits a VolumeZoneConflict. The kubectl events stream can only tell you so much, and not all kubernetes scheduling decisions are easy to debug, so sometimes it helps to go look directly at the mind of the scheduler to see why it's being cynical:
https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/scheduler/algorithm/predicates/predicates.go
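Before source-diving, it's worth squeezing what you can out of the API first. Something like this (the pod name is a placeholder) will surface the FailedScheduling events and their reasons:
kubectl describe pod <your-pending-crunchy-pod>    # look at the Events section at the bottom
kubectl get events --sort-by=.metadata.creationTimestamp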
Digging into the scheduler, you see the following logic (or similar) tripping:
| if pvName == "" { | |
| return false, nil, fmt.Errorf("PersistentVolumeClaim is not bound: %q", pvcName) | |
| } | |
In other words: I was beginning to suspect that a ReadWriteOnce volume may not be particularly happy in this configuration. Even though the pod was happy with the storage type, the scheduler was not.
Part 1: Creating your GlusterFS Storage backend.
1) Create a Gluster cluster.
https://github.com/rimusz/glusterfs-gce
2) Configure the settings file with your GCE project name.
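In practice, steps 1-3 boil down to roughly this; I'm only certain about the clone, so treat the rest as an outline and check the repo's README for the exact settings file and cluster-creation script:
git clone https://github.com/rimusz/glusterfs-gce
cd glusterfs-gce
# edit the settings file: GCE project, zone, number of nodes, disk sizes
# then run the cluster-creation script described in the README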
3) Now you have a GlusterFS cluster. It runs on 3 nodes.
4) Now, create some volumes:
./create-volume.sh pg1
./create-volume.sh pg2
./create-volume.sh pg3
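If you want to sanity-check the volumes from the Gluster side, SSH into one of the nodes and ask gluster directly (the instance name is a placeholder for one of your three Gluster nodes):
gcloud compute ssh <one-of-your-gluster-nodes> --command="sudo gluster volume info"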
5) Now, create the endpoints inside your kubernetes cluster:
kubectl create -f ./gfs_cluster_service.yaml
kubectl create -f ./gfs_cluster_endpoint.yaml
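For reference, those two files are just a selector-less service plus an endpoints object listing your Gluster node IPs. A rough sketch, assuming the gfs-cluster1 name (that's what the PVs below point at) and placeholder IPs:
apiVersion: v1
kind: Service
metadata:
  name: gfs-cluster1
spec:
  ports:
    - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gfs-cluster1
subsets:
  - addresses:
      - ip: 10.240.0.10   # placeholder: Gluster node 1
      - ip: 10.240.0.11   # placeholder: Gluster node 2
      - ip: 10.240.0.12   # placeholder: Gluster node 3
    ports:
      - port: 1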
6) Now, create 3 persistent volume claims from this template (CRUNCHY is a placeholder that gets substituted below)...
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: CRUNCHY
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs
  resources:
    requests:
      storage: 1G
i.e.
sed 's/CRUNCHY/crunchy-pvc1/g' ./master-pvc.yaml | kubectl create -f -
sed 's/CRUNCHY/crunchy-pvc2/g' ./master-pvc.yaml | kubectl create -f -
sed 's/CRUNCHY/crunchy-pvc3/g' ./master-pvc.yaml | kubectl create -f -
7) These claims correspond to the following storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: ""
  secretName: ""
  gidMin: "40"
  gidMax: "50000"
8) And make sure you have created at least 3 PersistentVolumes that match this class:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test1-data
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "gfs-cluster1"
    path: "pg1"
Great! Now you have a set of ReadWriteMany volumes that you can schedule your Crunchy pods onto...
In the end, you should be able to see your gluster volumes and storage classes in your cluster.
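A quick way to eyeball that everything is wired up:
kubectl get pv,pvc,storageclass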
9) Finally, check out a fixed tag of crunchy-containers and fire off a replicated Crunchy cluster...
export CCP_IMAGE_TAG=centos7-9.5-1.5.1
export CCPROOT=`pwd`
cd $CCPROOT/examples/kube/master-deployment/
./run.sh
2017-10-13 02:13:23 EDT [115]: [1-1] user=,db=,app=,client=LOG: pgaudit extension initialized
2017-10-13 02:13:24 EDT [115]: [2-1] user=,db=,app=,client=LOG: redirecting log output to logging collector process
2017-10-13 02:13:24 EDT [115]: [3-1] user=,db=,app=,client=HINT: Future log output will appear in directory "pg_log".
^C
➜ master-deployment git:(e39e947) kubectl get pods
NAME READY STATUS RESTARTS AGE
master-dc-3151449898-9qq3z 1/1 Running 0 37s
replica-dc-1560590133-xlk5d 1/1 Running 0 37s
replica2-dc-1560590133-86tl4 1/1 Running 0 36s
Then, whack the master pod and you should see a new one pop up very quickly (e.g. kubectl logs -f master-dc-3151449898-x776v to follow the replacement); there shouldn't be a bunch of CREATE statements in the logs, because it's reusing the existing storage correctly:
+ source check-for-secrets.sh
++ '[' -d /pguser ']'
++ echo 'pgpguser does exist'
pgpguser does exist
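For reference, the whack-and-watch loop is just this (the pod names come from the kubectl get pods output above and will differ in your cluster):
kubectl delete pod master-dc-3151449898-9qq3z   # kill the current master
kubectl get pods -w                             # watch the deployment spin up a replacement
kubectl logs -f master-dc-3151449898-x776v      # tail the new pod's startup logs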
REFERENCES
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
https://raw.githubusercontent.com/kubernetes/examples/master/staging/volumes/glusterfs/glusterfs-pod.json
https://github.com/rimusz/glusterfs-gce/blob/master/kubernetes-examples/GlusterFS/gfs_cluster_service.yaml
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
http://blog.kubernetes.io/2017/02/postgresql-clusters-kubernetes-statefulsets.html
