19.10.17

Get that damn Alpine postgres image running in openshift

Although the crunchy-data and rhsc postgres images work, another popular alternative is alpine, where you cannot use the NSS_WRAPPER utilities.

Although there are scattered hints about modifying SCCs and so on to get postgres (or other images that need special UIDs) working on openshift, nothing comprehensively describes how to do it, all in one place.

So here goes - here's an example of how to run a postgres container, based off of alpine (or other non-rhel-ish images), on your regular old openshift origin cluster - without completely opening up security for the entire cluster.

1) First, you can set up an openshift cluster with a working oadm via https://github.com/dwmkerr/terraform-aws-openshift .
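
(If you're starting from scratch with that repo, the workflow is roughly the following - check its README for the exact steps, since you'll need AWS credentials and a keypair wired up first:)

git clone https://github.com/dwmkerr/terraform-aws-openshift
cd terraform-aws-openshift
terraform init
terraform apply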

2) Get into your cluster as admin:

ssh -t -A ec2-user@$(terraform output bastion-public_dns) ssh

followed by

[ec2-user@ip-10-0-1-71 ~]$ oadm

3) Create a project where your 'special' container is going to live (i.e. the project that needs postgres):


 oc new-project hub
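
As an aside that will matter shortly: every new project gets its own dedicated UID range, recorded as an annotation on the namespace. You can peek at it like this (the actual range will differ per cluster):

oc get namespace hub -o yaml | grep sa.scc.uid-range
# e.g. openshift.io/sa.scc.uid-range: 1000070000/10000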

4) Now, make a service account for that namespace. Note that this is (as of step 4) useless - it's just a 'bucket' where we will put the special postgres-specific grants in the next step.

oc create serviceaccount postgresapp -n hub
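
If you want to double-check that it landed:

oc get serviceaccount postgresapp -n hub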

5) Now that the account exists in the hub namespace, let's do a privileged action - that is, let's make it so that anything using the postgresapp serviceAccount in the 'hub' namespace is able to run as any user:

oc adm policy add-scc-to-user anyuid system:serviceaccount:hub:postgresapp
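
You can verify the grant took by inspecting the anyuid SCC - the service account should now show up in its users list:

oc get scc anyuid -o yaml
# look for, under 'users:':
#   - system:serviceaccount:hub:postgresapp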


6) Ok. So now, if we try to use postgresapp as the serviceAccount for a pod in the namespace hub, it will have a special capability. Let's try it out! Running an alpine postgres image, we get:

Server-based private key: /tmp/hub-database.key
Server-based certificate: /tmp/hub-database.crt
Filebeat started successfully
Attempting to start Hub database.
initdb: could not look up effective user ID 1000070000: user does not exist
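
For reference, the kind of spec being tried here is just a bare deployment that points at the service account, with no securityContext at all yet. This is a minimal sketch (the image name is taken from the full manifest at the end of this post):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      serviceAccountName: postgresapp
      containers:
      - name: postgres
        image: blackducksoftware/hub-postgres:4.2.1
        ports:
        - containerPort: 5432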

7) What happened? Postgres spun up with a random UID out of the project's assigned range (the uid-range annotation from step 3), and initdb can't look that UID up in /etc/passwd. We need to make it spin up as the postgres user (UID 26) instead. But just setting runAsUser: 26, without the service account from step 5, gets rejected by the default restricted SCC:

pods "postgres-2211615528-" is forbidden: unable to validate against any security context constraint: [securityContext.runAsUser: Invalid value: 26: UID on container postgres do
es not match required range.  Found 26, required min: 1000070000 max: 1000079999]
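
(A handy debugging trick: once a pod does get admitted, openshift records which SCC validated it in an annotation on the pod, so you can always check what you actually got. <pod-name> below is whatever your pod is called:)

oc get pod <pod-name> -n hub -o yaml | grep openshift.io/scc
# e.g. openshift.io/scc: anyuid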


8) Ok. So how do we do that? We set a securityContext in the pod spec and point the pod at the postgresapp serviceAccount, so the anyuid SCC applies:

      spec:
        volumes:
        - name: postgres-persistent-vol
          emptyDir: {}
        securityContext:
          runAsUser: 26
        serviceAccountName: postgresapp

9) Now, the good news is we can start as user 26. However, "initdb" still isn't happy - it can't 'find' user 26 anywhere in the system, because it wants to see the user ID in /etc/passwd. For those of you willing to make that change in your postgres container (e.g. at image build time, or via an init container), you can fix that issue - there's a sketch after the error below - and then you'll have a working postgres:

user ID 26: user does not exist.
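
Here's a sketch of that fix, following the usual openshift pattern for images that must tolerate arbitrary UIDs. It assumes you control the image: make /etc/passwd group-writable at build time, then have the entrypoint register the runtime UID before postgres starts.

# Dockerfile (build time): allow the root group to edit /etc/passwd
RUN chmod g+w /etc/passwd

# entrypoint script (run time): add a passwd entry for the current UID
# if one doesn't exist yet, so initdb can resolve it
if ! whoami > /dev/null 2>&1; then
  echo "postgres:x:$(id -u):0:PostgreSQL:/var/lib/postgresql:/bin/sh" >> /etc/passwd
fi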


For the rest of you that don't mind running postgres as root, you can just go ahead and do this:

10) Turn root on. Now postgres can do whatever it wants to:

          serviceAccountName: postgresapp
          securityContext:
            runAsUser: 0
11) Now postgres is much happier: it can find user 0, and that's enough for it to do what it needs to do... whatever it wants. However, in my case I wanted the emptyDir volume abstraction, and SELinux blocked certain operations at the /tmp level and so on. So the last thing you need to do is fix up the container's SELinux label, so that the kernel doesn't block it from moving files around in a natural way:

            privileged: true
            seLinuxOptions:
              level: "s0:c123,c456"

So, putting it all together: 

A production postgres deployment on openshift can be based off of alpine linux, if you set it up like so (this is the root-based variant):
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      name: postgres
      labels:
        app: postgres
        tier: postgres
    spec:
      volumes:
      - name: postgres-persistent-vol
        emptyDir: {}
      serviceAccountName: postgresapp
      containers:
      - name: postgres
        image: blackducksoftware/hub-postgres:4.2.1
        resources:
          requests:
            memory: 3072M
            cpu: 1
          limits:
            memory: 3072M
            cpu: 1
        securityContext:
          runAsUser: 0
          allowPrivilegeEscalation: true
          privileged: true
          seLinuxOptions:
            level: "s0:c123,c456"
        envFrom:
        - configMapRef:
            name: hub-config
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-persistent-vol
        ports:
        - containerPort: 5432
      nodeSelector:
        blackduck.hub.postgres: "true"
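
Assuming the hub-config ConfigMap already exists in the project, and with a node labeled to match the nodeSelector, rolling it out is the usual drill (the file name and <node-name> below are placeholders):

oc label node <node-name> blackduck.hub.postgres=true
oc create -f postgres.yml -n hub
oc get pods -n hub -w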


1 comment:

  1. FYI, for protoform based blackducks, this is all automated now :)... Quick note on how to get SA's so you have god privileges:

    oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:bd-operator:blackduck-protoform
