If you use vagrant to maintain your dev recipes, then your natural predilection might be to now move to supporting docker.
However, you don't want to lose the high-level vagrant interface, do you? After all, using the vagrant idiom, you get:
- A well thought out interface for spinning up systems and tearing them down
- The support of the vagrant community, which is continually adding new plugins and bug fixes
- A strong decoupling of deployment of the machine from provisioning.
Step 0: Build a Dockerfile that is "vagrant friendly"
A vagrant-friendly Dockerfile is one that massages your base OS so that vagrant can easily SSH into it. This means starting sshd in a slightly unusual way: not with systemctl, because systemd isn't running as the init process inside a typical Linux container, so systemctl can't manage services there.
My Dockerfile for CentOS 7 looks like this. It's a little bit imperfect, but it works; for example, I need to change the ecdsa_key. The main elements are that we put the ssh host keys into /etc/ssh/.... and MANUALLY start sshd using /usr/sbin/sshd -D. Thanks to Tim St. Claire at Red Hat for pointing me in that direction.
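A minimal sketch of such a Dockerfile, assuming the well-known vagrant insecure public key is saved locally as vagrant.pub (the package list and key handling here are illustrative, not necessarily the exact original):

FROM centos:7

# Install an ssh server and sudo so vagrant can connect and provision.
RUN yum -y install openssh-server openssh-clients sudo && yum clean all

# Generate the ssh host keys under /etc/ssh/.
RUN ssh-keygen -A

# Create the vagrant user with passwordless sudo, like a normal vagrant box.
RUN useradd vagrant && \
    echo 'vagrant ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vagrant && \
    chmod 440 /etc/sudoers.d/vagrant

# Authorize the vagrant insecure public key so "vagrant ssh" works out of the box.
COPY vagrant.pub /home/vagrant/.ssh/authorized_keys
RUN chmod 700 /home/vagrant/.ssh && \
    chmod 600 /home/vagrant/.ssh/authorized_keys && \
    chown -R vagrant:vagrant /home/vagrant/.ssh

EXPOSE 22

# Start sshd manually, in the foreground; systemctl won't work in a container.
CMD ["/usr/sbin/sshd", "-D"]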
Step 1: Create a new kind of Vagrantfile.
Vagrantfiles for docker instances are quite different. Your Vagrantfile will be responsible for building the base image, launching the container, and provisioning it.
Your Vagrantfile should look like this.
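Here is a minimal sketch of a docker-provider Vagrantfile; the build directory and option values are assumptions, so adjust them for your own image:

Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    # Build the image from the Dockerfile in this directory rather than pulling a box.
    d.build_dir = "."
    # Keep the container running after "vagrant up".
    d.remains_running = true
    # This container runs sshd, so tell vagrant it can ssh in.
    d.has_ssh = true
  end
end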
Essentially, your Vagrantfile just defines a few simple parameters: what image to build, whether to keep the container running, and whether or not to ssh into it (this last one varies across docker instances, because sometimes you really don't want to ssh).
Finally! Profit!
Now that your machine is vagrant friendly (i.e. it accepts the vagrant insecure ssh key and starts the ssh daemon), you can drive it with vagrant. To launch, we just invoke
vagrant up --provider=docker
And vagrant is happy to start the container and do some magic for us.
After it comes up, we can ssh into the machine the way we normally would, using vagrant ssh.
In conclusion: Vagrant can give you a nice dev workflow on your docker instances; just treat your Dockerfile as if it's a vagrant box.
PART 2: Vagrant in the cloud.
Vagrant has made our lives wonderful for development.
But you don't need vbox/kvm/vmware/etc. to use it.
There are other ways to deploy vagrant boxes.
- Docker
- OpenStack
- EC2
- libvirt
- and so on
So what changes when you move off a local hypervisor?
Setting up a vagrant box without a hypervisor changes things. There are a few main differences.
BOXES
The notion of a box as a big disk image, packaged alongside a JSON metadata file that vagrant reads when loading the VM, is gone.
This is because the cloud provider already manages machine images for you, and the vagrant plugins themselves just need to translate your requirements (i.e. m1.large, this disk volume, this private IP address) into REST calls. Thus, your boxes will be tiny little .box files with a simple JSON file and nothing else.
So, for example, a normal vagrant box contains the following: Vagrantfile, box-disk1.vmdk, box.ovf, metadata.json. With the ec2 boxes, we only have a metadata.json file.
You can inspect this by running "ls" under your $HOME/.vagrant.d/boxes directory.
IP ADDRESSES
In order to get custom IPs, you now need to do some work in your cloud provider. For example, you may need to set up a virtual private network. Again, this is because vagrant isn't doing all the heavy lifting anymore. There is a downside to this: the asynchronous nature of the cloud means that if you rapidly provision and destroy, you might see errors about IP addresses already being bound.
If you see this error, don't worry, just wait a few seconds and try again. :)
SECURITY
Now security matters, even if you're building throwaway servers. Obviously, you don't want to provision anything in the cloud with the vagrant default key pair. Instead, you'll need to create a key pair through your cloud provider, and then set the provider-specific vagrant options so that your local private key is the one vagrant uses for ssh. Even if you never run vagrant ssh yourself, this is critical for vagrant to be able to provision and share folders properly.
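In the EC2 example later in this post, that binding comes down to two lines, one naming the key pair that lives in AWS and one pointing at the private key you downloaded:

config.vm.provider :aws do |aws, override|
  aws.keypair_name = "vagrant-stack"                            # key pair created in the AWS console
  override.ssh.private_key_path = "~/.ssh/vagrant-stack.pem"    # the matching private key on your machine
end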
SSH and SHARED FOLDERS
SSH and shared folders still work! But they are a little different. For example, with SSH, you may need to embed some provider-specific parameters into your Vagrantfile that help your cloud provider set things up properly. Meanwhile, with shared folders, vagrant plugins can use tools like rsync (rather than NFS) to get the job done.
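For example, a synced folder can be pointed at rsync explicitly (a sketch; the paths are placeholders):

# Push the project directory into the instance over rsync instead of NFS/vboxsf.
config.vm.synced_folder ".", "/vagrant", type: "rsync"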
ASYNCHRONOUS PROVISIONING
Finally, when you provision your nodes, you will see that all machines return from the launch step almost instantly, followed by polling (waiting for the machines to become ready).
So... what does a Vagrantfile for the cloud look like?
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu_aws"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
config.vm.synced_folder "../.", "/vagrant", id: "vagrant-root"
config.vm.provider :aws do |aws, override|
override.ssh.private_key_path = "~/.ssh/vagrant-stack.pem"
aws.security_groups = ["sg-11111"]
override.ssh.username = "ec2-user"
aws.user_data =
"#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty"
aws.keypair_name = "vagrant-stack"
aws.ami = "ami-10000"
aws.region = "ap-northeast-1"
aws.subnet_id = "subnet-abcdefg"
aws.associate_public_ip = "true"Private Networks
aws.private_ip_address = "172.31.16.10"
aws.tags = {
'Name' => 'centos-for-testing',
}
end
end
Private Networks
Setting up an EC2 VPC was tricky for me. Once you set one up, however, you can use the vagrant private_ip_address setting to guarantee that your servers can all talk to each other internally, and get provisioned with totally static configuration files (i.e. since you know their IPs ahead of time).
It's hard to describe this, so I've attached a screenshot below. Here are the things to make sure you get right:
1) Each VPC (more precisely, its subnet) is in the exact same availability zone as the EC2 instances.
2) Each instance is provisioned with the VPC defined ahead of time.
3) You create a SUBNET. That's what ties the EC2 instance to the VPC it's going to be on. The subnet is then referenced from your ec2 instance (see the snippet below).
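In Vagrantfile terms, that subnet reference and the static address are just the aws.subnet_id and aws.private_ip_address settings from the example above:

aws.subnet_id = "subnet-abcdefg"            # the subnet you created inside the VPC
aws.private_ip_address = "172.31.16.10"     # a static address within that subnet's CIDR block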
I know this is a terse explanation, but when coupled with the AWS docs, it should give you some intuition about what to do.
[Screenshot of the VPC and subnet setup omitted.]
Also make sure you allow traffic inside your subnet by adding the right rules on top of the restrictive default rule.
So, here's a quick rundown of what you need to know when moving your vagrant setup to the cloud.
Finding boxes won't be so damn hard :)
That's because vagrant doesn't need to install the box on a hypervisor (this is done for you in the cloud). As an example, you can look at the JSON metadata file that is created as part of your box.
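For the aws provider's dummy box referenced in the Vagrantfile above, that metadata file is essentially just a provider declaration (quoted from memory, so treat it as illustrative):

{
  "provider": "aws"
}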
You still have to define a proper security group
Unlike private VMs on your laptop, on EC2 without security groups, ssh is borked. Make sure to add your security groups into your Vagrantfile, and create sufficiently permissive ones.
You also need a pem file
The old vagrant public/private insecure key pair isn't safe in these parts! Make a pem file in the AWS interface, download it, and set its name in the corresponding aws.___ value in your Vagrantfile. This is easy enough.
Hack around TTY
You have to set up cloud-init user data that disables the requiretty sudo default for your ssh user (that's what the aws.user_data line above does), otherwise vagrant cannot sudo during provisioning. See https://github.com/mitchellh/vagrant-aws/issues/48 for details.
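Expanded out, the user_data script embedded in the Vagrantfile above is just:

#!/bin/bash
# Let vagrant run sudo over ssh without a tty for the ec2-user account.
echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty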
Private networks
Private networks can be created in AWS and OpenStack. In either case, you need to tell vagrant how to access them, and you'll have to play around in the subnet interface for this. In both OpenStack and EC2 there are user interfaces for this (e.g. the "VPC dashboard" on AWS).