Ansible's playbook concept is fundamentally different from the way we normally use vagrant to provision. A playbook runs against a group of machines, and multiple plays are combined to bring up a running system. When vagrant creates our VMs, it usually runs the provisioner code as part of creating each one. So, to provision in waves (i.e. first create the VMs, then set them all up with a few basics, and then run another set of tasks on all of them, and so on), ansible is a natural fit: it allows you to reference machines by their groups and define the "plays" which different machines run. Since this is a little awkward for vagrant, it took me a little while to figure out how to do it right.
First, the basics of how vagrant and ansible work together:
- Vagrant autogenerates an ansible inventory for you, so you can control your inventory in the same place as your infrastructure setup code.
- Vagrant can call ansible as a provisioner, provided ansible is installed on the local (HOST) system which is running "vagrant up".
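For example, a bare-bones Vagrantfile that uses ansible as its provisioner looks roughly like this (the machine name, box, and playbook path are placeholders, not from my actual setup):

```
# Minimal sketch: one VM, provisioned by ansible-playbook installed on the HOST.
# "node1", the box, and "playbook.yml" are placeholder names.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  config.vm.define "node1" do |node|
    node.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbook.yml"
    end
  end
end
```

When this runs, vagrant writes the generated inventory into the project's .vagrant/ directory, so ansible sees the same machine names you define here.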
My first sign of trouble was a playbook failure when provisioning through vagrant: "undefined variables: 'dict object' has no attribute 'ipv4'". I was shocked! I assumed this was a networking error, and got ready to dive into SELinux, firewalls, and cloud network debugging... But alas, this was not a fundamental error in my VMs: it was NOT happening when running ansible with its default options.
So, to properly accomplish the second point above, we often want vagrant to behave the way ansible normally does: by running a playbook on all hosts, using the group mappings provided.
Ansible provisions in WAVES, Vagrant provisions one at a time.
For example, a typical ansible run will show the same task being run on multiple machines at once.
Thus, the default for vagrant is the opposite of what we normally do in ansible: provisioning happens one machine at a time, with a separate ansible run per VM. This confuses ansible when it needs to do things like look up facts about the other hosts in the play (which is exactly where that missing 'ipv4' fact came from).
Thus, if your ansible recipe expects all machines to be "set up" by one task before going on to the next one, it will fail when ansible is run with a --limit option: that turns the "wave" behaviour off and serializes the run, executing all tasks in order on one machine at a time rather than in "waves" on all machines at once. And a per-machine --limit is exactly what vagrant passes by default.
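To make the difference concrete, here's roughly what the two styles of invocation look like (the host name and playbook path are hypothetical):

```
# Normal ansible behaviour: each task runs across every inventory host in a "wave"
# before the next task starts.
ansible-playbook -i inventory playbook.yml

# With --limit, the whole playbook is applied to a single host; tasks that expect
# facts or state from the other machines have nothing to work with.
ansible-playbook -i inventory --limit "node1" playbook.yml
```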
The simple fix is to run your vagrant ansible provisioners like so:
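Here's a minimal sketch of what that block can look like in the Vagrantfile (node names, box, and playbook path are placeholders):

```
# Sketch of a multi-machine Vagrantfile that provisions with ansible.
nodes = ["node1", "node2", "node3"]   # static array of machine names (see note below)

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  nodes.each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name

      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "playbook.yml"
        # "groups" maps an ansible inventory group onto the machines defined above.
        ansible.groups = {
          "nodes" => nodes
        }
        # The fix: don't let vagrant pass --limit=<this machine>; run the playbook
        # against every host in the generated inventory, in "waves".
        ansible.limit = "all"
      end
    end
  end
end
```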
In the above, you can see that I've defined "groups", which is simply a map of the machines which my Vagrantfile is creating (we actually should have read the "nodes" dynamically rather than specifying them as a static array, but you get the point).
Anyways, the key thing we're doing here is specifying the ansible.limit = "all" directive. This just makes it so that ansible's --limit no longer restricts the run to one machine at a time.
Lesson learned: when running a vagrant provisioner, make sure its defaults exactly match the defaults of the provisioner when run on its own, if you rely on those defaults.
In general, to inspect this, you can run:
VAGRANT_LOG=info vagrant provision
This should (for most provisioners, at least) at some point print out the EXACT shell invocation of ansible (or puppet, or whatever) that vagrant is running.
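For instance, piping that log through grep should surface the underlying command (the grep pattern here is just a guess at what to look for; adjust it for your provisioner):

```
# Capture the verbose log and pull out the ansible-playbook invocation.
VAGRANT_LOG=info vagrant provision 2>&1 | grep -i "ansible-playbook"
```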

