5.8.17

Inject a whole file as a ConfigMap into Kubernetes.

When configuring or modifying config for pods, you traditionally have to choose between:

- jigging command line options, so you don't have to edit a config file, or
- rebuilding a container to have dynamic config pulled in from somewhere.

These are both totally annoying.  Kubernetes instead allows you to inject configuration into pods via ConfigMaps.

However, the distinction between key/value configs and file configs is tricky to figure out the first time, but super easy once you've seen both.

PARAMETERS Injected as Environment Variables.

To create a config map that injects individual parameters as environment variables, you do something like this:

kubectl create -f mymap.yaml

Where mymap.yaml looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: hub-config
  namespace: default
data:
  HUB_POSTGRES_ENABLE_SSL: "false"
  HUB_POSTGRES_USER: "blackduck.user"
  HUB_POSTGRES_ADMIN: "blackduck"
  HUB_WEBSERVER_PORT: "8443"
  PUBLIC_HUB_WEBSERVER_PORT: "443"
  IPV4_ONLY: "0"
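
The map above only defines the data; to actually surface those keys as environment variables, your pod spec has to reference the ConfigMap. A minimal sketch (the pod name and image here are illustrative, not from the original post):

```yaml
# Sketch: consume every key in hub-config as an environment variable.
# Pod and image names are hypothetical; only the configMapRef matters.
apiVersion: v1
kind: Pod
metadata:
  name: hub-example
  namespace: default
spec:
  containers:
  - name: hub
    image: blackducksoftware/hub-webapp:4.0.0   # illustrative image
    envFrom:
    - configMapRef:
        name: hub-config   # each key, e.g. HUB_WEBSERVER_PORT, becomes an env var
```

You can also pull in individual keys with `valueFrom: configMapKeyRef` if you don't want the whole map.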



DIRECTORIES OR FILES Injected as Mounts.

Sometimes, rather than injecting your config as env vars, you want to inject a whole file as one big chunk.  To do this, using --from-file is super easy:

kubectl create configmap hub-logstash-grafana --from-file=/usr/share/logstash/config/

In this case, you'll see a different internal format where, rather than k/v pairs, each data key holds an entire raw file:
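
For example, if the source directory contained a single logstash.conf, the resulting map would look roughly like this (the file contents below are purely illustrative):

```yaml
# Roughly what `kubectl get configmap hub-logstash-grafana -o yaml` returns;
# the logstash.conf body here is a made-up example.
apiVersion: v1
kind: ConfigMap
metadata:
  name: hub-logstash-grafana
  namespace: default
data:
  logstash.conf: |
    input {
      tcp { port => 5044 }
    }
    output {
      stdout {}
    }
```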



In this case, place a file called "logstash.conf" (or whatever) into the above directory on your host before running the command.

Then, the entire contents of all the files in that directory will be injected.  To get them to the right place inside your container, you volume mount them:
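
A sketch of the two halves in a pod spec, assuming the hub-logstash-grafana map created above (the volume name and mount path are illustrative):

```yaml
# Sketch: a volume backed by the ConfigMap, plus the mount that places
# its files into the container. Names and paths are illustrative.
spec:
  containers:
  - name: logstash
    image: blackducksoftware/hub-logstash:4.0.0
    volumeMounts:
    - name: dir-graphite
      mountPath: /tmp/x          # each file in the map appears under /tmp/x
  volumes:
  - name: dir-graphite
    configMap:
      name: hub-logstash-grafana
```

Note that the volume and the mount are declared separately: the volume binds the ConfigMap to the pod, and the mount binds the volume to a path in a specific container.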

Debugging

If you cordon all your nodes but one, or use a label selector, you can effectively isolate your cluster to a single node, and then monitor that node's kubelet logs to see how ConfigMaps are being mounted.

If you look at kubelet logs, you'll see something like this:

Aug 05 01:04:44 ip-10-0-26-84 kubelet[30344]: I0805 01:04:44.963510   30344 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/configmap/a47ebff8-7976-11e7-8369-12207729cdd2-dir-graphite" (spec.Name: "dir-graphite") pod "a47ebff8-7976-11e7-8369-12207729cdd2" (UID: "a47ebff8-7976-11e7-8369-12207729cdd2").

And then, you can find the corresponding mount point on the node's filesystem, under the kubelet's pod directory:

ls /var/lib/kubelet/pods/a47ebff8-7976-11e7-8369-12207729cdd2/volumes/kubernetes.io~configmap/dir-graphite/..8988_05_08_00_40_13.463145014/

Finding it in your pod?

The next step in the debugging pathway is the pod itself: in kubectl describe pod output, the volume shows up under each container's Mounts:

    Mounts:
      /tmp/x from dir-graphite (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2t0cl (ro)

Successfully defining volumes is not the same as defining the VolumeMounts.  If your volume mounts are set up correctly, you should be able to see them by running docker inspect on the container, like so:
        },
        "Mounts": [
            {
                "Source": "/var/lib/kubelet/pods/bf655bb7-7985-11e7-8369-12207729cdd2/volumes/kubernetes.io~configmap/dir-graphite",
                "Destination": "/tmp/x",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },

Corresponding to:

 31         containers:
 32         - image: blackducksoftware/hub-logstash:4.0.0
 33           name: logstash
 34           volumeMounts:
 35           - name: dir-graphite
 36             mountPath: /tmp/x 
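
That volumeMounts excerpt only works because the pod also declares the backing volume; the corresponding volumes section (a sketch, reusing the map name from earlier) looks like:

```yaml
# The volume definition that the dir-graphite volumeMount refers to.
volumes:
- name: dir-graphite
  configMap:
    name: hub-logstash-grafana
```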
