Setting up CRI-O
There are a few ways to do this: Minikube/Minishift, or https://cloud.openshift.com/clusters/install . For the latter, you'll need a Red Hat developer account, and a Route 53 DNS name you can use on EC2.
Minikube
Minikube can be started with a --container-runtime option for CRI-O; however, you might run into issues where emptyDir doesn't work right, or where /sys permissions are wrong (this requires a node restart).
For me, it wasn't too bad to get this running; I just had to follow the directions in minikube upstream...
minikube start \
--network-plugin=cni \
--extra-config=kubelet.container-runtime=remote \
--extra-config=kubelet.container-runtime-endpoint=/var/run/crio.sock \
--extra-config=kubelet.image-service-endpoint=/var/run/crio.sock \
--bootstrapper=kubeadm
But afterwards, I did have to ssh into my cluster to change the way the registry file works. This felt a little off, since I figured the defaults would try to be as similar as possible to what the Docker daemon already does, but no big deal.
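For reference, the change was just pointing CRI-O's registry search list at something sane on the node. Roughly, it looks like this (a sketch assuming the stock minikube layout and the older registries key in crio.conf; newer CRI-O versions move this into /etc/containers/registries.conf):
minikube ssh
# edit CRI-O's config so there is at least one default registry to search
sudo vi /etc/crio/crio.conf
#   registries = [
#     "docker.io"
#   ]
# (CRI-O needs a restart afterwards to pick the change up - see below)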
However, I did see something else: the permissions on my /sys directory were claimed to be broken. I assume this can happen in cases where CRI-O starts up after the kubelet does, but I'm not sure. Either way, the directions in the Kubernetes event logs were Docker-specific and suggested a daemon restart; rest assured, a restart of CRI-O fixed it for me. I'm assuming this is all related to cAdvisor somehow, but that's also a guess (i.e. maybe CRI-O just wraps results parsed out of cAdvisor for some reason, and maybe Docker does this as well).
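If you hit the same thing, the fix really was just bouncing the CRI-O service on the node (a sketch, assuming CRI-O is running under systemd as it does in the minikube VM):
minikube ssh
# the kubelet events complain about /sys and tell you to restart Docker,
# but restarting CRI-O (not Docker, not the whole node) is what clears it
sudo systemctl status crio
sudo systemctl restart crio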
I filed an issue for this in upstream CRI-O (https://github.com/kubernetes-sigs/cri-o/issues/1970).
The error messages in kube are a little inaccurate here, so I made a PR to fix this upstream ~ https://github.com/kubernetes/kubernetes/pull/72064 . The fix is really simple... specifically, it's a one-liner, but I think it's pretty important now that folks are experimenting with non-Docker approaches to Kubernetes :)
So again, as we can see, we're getting there, but even kube itself still has a lot of Docker-specific error messages and logic in the codebase that will need to get cleaned up over the next year.
In the cloud
Google Cloud
Pretty shockingly, the developer preview has no support for any cloud other than EC2/OpenStack. To run on GCP, you need to build a custom image which supports nested virtualization! I didn't go down this road because it seemed like overkill, and it isn't directly supported by the OpenShift 4 installation options (although libvirt is).
See https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances for details; it's not hard, but again, not worth it while we're still in tech preview for OpenShift 4 IMO. Hurry up and give us a native Google solution, Red Hat :) :)
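If you're curious anyway, the gist from that doc is creating an image with the VMX license attached and booting your nodes from it. A sketch (disk and image names are placeholders; double-check the license URL and flags against the current page):
# build an image that advertises nested virtualization support
gcloud compute images create nested-virt-image \
  --source-disk my-disk --source-disk-zone us-central1-a \
  --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
# then boot instances from it on a CPU platform that supports VMX
gcloud compute instances create nested-virt-vm \
  --zone us-central1-a --image nested-virt-image \
  --min-cpu-platform "Intel Haswell"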
EC2
Red Hat's developer preview for OpenShift 4 seems streamlined. openshift-ansible is no longer necessary, which makes it easier to set up the installer (no Python magic required, just a binary; clearly Go is taking over there). In any case, running the binary is very sleek: it asks you questions and lets you hack on intermediate steps.
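If you haven't tried it, the flow is basically just this (subcommand names as of the developer preview I used; they may shift):
# grab the openshift-install binary from cloud.openshift.com, then:
./openshift-install create cluster
# it interactively prompts for the platform (aws), region, base domain
# (your Route 53 zone), a cluster name, and your pull secret, and writes
# intermediate assets into the working directory, which is where you can
# hack on things before letting it continue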
Getting your containers named right, so CRI-O can pull them.
For Black Duck, we deploy a Postgres container which originates from the Red Hat (RHSCL) container catalog registry, which isn't a default pullable registry in CRI-O. You'll want to set registries in CRI-O so that there is at least one default registry to search, like this:
registries = [
  "gcr.io",
  "registry.access.redhat.com",
  "docker.io"
]
Do this in the /etc/crio/crio.conf configuration file... I still saw some instability in launching containers on my clusters, but that may be due to the fact that I was running CRI-O built from source.
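Once the registries are set, a quick sanity check that short image names resolve is to bounce CRI-O and pull something unqualified with crictl (a sketch; the postgres tag below is just a placeholder):
# re-read crio.conf
sudo systemctl restart crio
# an unqualified name should now be resolved against the registries list
sudo crictl pull postgres:9.6
sudo crictl images | grep postgres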
In any case, once you set up your CRI-O registries, you should be able to run any container, and image pulls should work the same way docker pull does, so your app runs as normal.
In my case, I used the Black Duck operator to deploy images onto CRI-O clusters using internal Kubernetes APIs, and it seemed happy.
Although I wasn't able to completely deploy my app due to various complaints about the /sys directory not being writable, I think that a stable, supported OpenShift or Kubernetes distribution running a stable version of CRI-O should work, as advertised, the same as a Docker-based Kubernetes does.
Just make sure you add default registries beforehand! It took me a while to figure this out.
For large clusters, given my initial experience with CRI-O, I wouldn't quite support it yet - it doesn't seem like it will naturally always work with any Docker image as easily as the regular old Docker daemon does, and I'm not sure what security expectations it might have.
Additionally, I'm a little afraid that having to install podman or skopeo every time I want to exec directly from a node is a little heavyweight. I'll wait for an end-to-end Docker alternative before telling customers to run CRI-O in production.
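That said, crictl (which usually ships alongside CRI-O) does cover the basic docker ps / docker exec workflow on a node, so you're not completely stranded; roughly:
# list running containers, like docker ps
sudo crictl ps
# exec into one (the ID comes from the ps output)
sudo crictl exec -it <container-id> /bin/sh
# tail its logs
sudo crictl logs <container-id>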
Images!
One of the tricky things here that you'll have to accept is that, after you move off of Docker, you'll need tooling to play with layers and images that you previously took for granted. Like so many other open source projects, CRI-O is built to be composable, not to be an end-to-end product. So look into getting a firm understanding of podman, as well as skopeo, to round out your ability to deal with the myriad of container-specific tasks you'll run into when debugging things in real time.
In particular, skopeo will give you Docker-style image inspection without needing a daemon. Thank god for this. We will be powering Black Duck's OpsSight product on skopeo as a container export tool, so it runs in clusters without a Docker daemon.
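For example (the image name below is just an arbitrary public one):
# inspect a manifest, labels, and layers straight from the registry - no daemon, no pull
skopeo inspect docker://docker.io/library/postgres:latest
# skopeo can also copy images between registries or into an OCI layout on disk,
# which is the piece that matters for exporting containers out of daemonless clusters
skopeo copy docker://docker.io/library/postgres:latest oci:postgres-oci:latest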
