...THIS IS A WORK IN PROGRESS ...
Setting up Sahara on a local environment is quite easy.
1) First, get a machine spun up with OpenStack. I used the recipe for RDO. Note it's best to use the --default-password=1234 option, so that all services (including MySQL) have an easy-to-remember default password.
2) Then you can follow the directions here, http://docs.openstack.org/developer/sahara/userdoc/installation.guide.html , to set it up and get it running. However... if you aren't installing a MySQL database (as the metadata store for sahara), then the default (SQLite) will fail when running the sahara database creation command. As a workaround, I commented out one of the functions which is run during database creation (database creation == database upgrade for first-time users).
Note that if you want to push through with SQLite, you can comment out the upgrade() function in this python file: /usr/lib/python2.7/site-packages/sahara/db/migration/alembic_migrations/versions/007_increase_status_description_size.py.
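For reference, the workaround boils down to making upgrade() a no-op. Here's a sketch of what the edited function might look like (the real file's body calls alembic operations to widen a column, which SQLite can't handle; the commented-out call below is illustrative, not the file's exact contents):

```python
# Sketch of the SQLite workaround: stub out the upgrade() step in
# 007_increase_status_description_size.py so the migration becomes a no-op.
def upgrade():
    # Original alembic column-resize commented out for SQLite, e.g.:
    # op.alter_column('clusters', 'status_description', ...)
    pass
```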
Now on to setting up sahara.
So our strategy now will simply be to:
- Use sahara's python client libraries to spin up Hadoop clusters, with one caveat:
- We also need to configure sahara with information about images. Since that's not in the blog post above, we will do some hacking at the python REPL to see how we can use the OpenStack python API to call the REST services for us.
- REST API documentation for Hadoop job submission
- Understanding the Python Sahara Client
- The REST API for Sahara's image registry, needed before you start a cluster
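Since the image registry has to be populated before a cluster will launch, here's a rough sketch of what those REST calls look like if you drive them by hand from python (python 3 syntax; the tenant id, image id, token, and even the exact endpoint paths are my assumptions -- verify them against the image registry REST docs above before using this):

```python
import json
import urllib.request

# All identifiers below are hypothetical placeholders -- substitute your
# own tenant id, image id, and keystone token.
SAHARA_URL = "http://127.0.0.1:8386/v1.1/mytenant"
IMAGE_ID = "11111111-2222-3333-4444-555555555555"
TOKEN = "keystone-token-here"
HEADERS = {"Content-Type": "application/json", "X-Auth-Token": TOKEN}

# Register the image (record its ssh username), then tag it so the
# vanilla plugin will offer it. Endpoint shapes are my reading of the
# v1.1 image-registry API -- double-check against the docs linked above.
register = urllib.request.Request(
    "%s/images/%s" % (SAHARA_URL, IMAGE_ID),
    data=json.dumps({"username": "fedora"}).encode(),
    headers=HEADERS, method="POST")

tag = urllib.request.Request(
    "%s/images/%s/tag" % (SAHARA_URL, IMAGE_ID),
    data=json.dumps({"tags": ["vanilla", "2.4.1"]}).encode(),
    headers=HEADERS, method="POST")

# To actually send them:
# urllib.request.urlopen(register)
# urllib.request.urlopen(tag)
```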
Getting started with the python API. Note that until you do the steps below, some calls won't work.
To do this, I used elmiko's recipe above. The first test I ran at the python terminal worked nicely.
Launch a python shell, after having installed all the sahara services:

>>> from elmiko import *
>>> result = c_2.cluster_template_vanilla24()
>>> result
{'name': 'vanilla24', 'cluster_configs': {}, 'plugin_name':
'vanilla', 'node_groups': [{'count': 1, 'floating_ip_pool' ...
Great! We can use the functions above to make a cluster template that will be sent over the wire to sahara. This is a lot easier than clicking all those buttons in the sahara UI.
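In fact, the template is nothing magical -- it's just a plain dict that gets serialized to JSON and shipped to sahara. Here's a hand-rolled sketch of roughly what c_2.cluster_template_vanilla24() returned (field values are illustrative, filled in to match the truncated output above; the hadoop_version and node_processes entries are my guesses, not from the output):

```python
import json

# Hand-rolled equivalent of the generated template: just a dict.
template = {
    "name": "vanilla24",
    "plugin_name": "vanilla",
    "hadoop_version": "2.4.1",   # assumption: version implied by "vanilla24"
    "cluster_configs": {},
    "node_groups": [
        # node_processes lists here are hypothetical examples
        {"name": "master", "count": 1, "floating_ip_pool": None,
         "node_processes": ["namenode", "resourcemanager"]},
        {"name": "worker", "count": 3, "floating_ip_pool": None,
         "node_processes": ["datanode", "nodemanager"]},
    ],
}

# This is what actually goes over the wire:
print(json.dumps(template, indent=2))
```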
Register sahara's endpoint with keystone:

keystone endpoint-create --service sahara --publicurl "http://127.0.0.1:8386/v1.1/%(tenant_id)s" --adminurl "http://127.0.0.1:8386/v1.1/%(tenant_id)s" --internalurl "http://127.0.0.1:8386/v1.1/%(tenant_id)s"
And for details, you can check out the Red Hat reference docs on setting up keystone permissions for sahara: Sahara reference impl.
Now, you will also want to make sure:
- Sahara can read/write to a database in MySQL (or whatever RDBMS you are using). With packstack, MySQL is easiest since it's the default, and SQLite is now deprecated for sahara. Do this by creating a MySQL URL, such as "mysql://root:1234@127.0.0.1:3306/saharaj", and follow that by, of course, creating your database using mysql (mysql> create database saharaj).
- Note that the root password for MySQL is set when you run the OpenStack installation in packstack. So, for example, if you provide the --default-password=1234 option, packstack will make your MySQL password 1234.
- auth_protocol=http (NOT https) in /etc/sahara/sahara.conf.
- The admin sahara credentials are added; again, this relates to the default password you set up if you were using packstack to create all your services at once:

  # Keystone account username (string value)
  admin_user=admin
  # Keystone account password (string value)
  admin_password=redhat

- Logging is turned on in sahara: do this by updating the log_dir in /etc/sahara/sahara.conf.
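Pulling those config points together, the relevant /etc/sahara/sahara.conf fragment might look something like this (a sketch -- section names vary between sahara releases, and older sahara-all packages keep some of these under [DEFAULT], so check the comments in your own conf file; values are the packstack defaults discussed above):

```
# /etc/sahara/sahara.conf (sketch)
[DEFAULT]
log_dir=/var/log/sahara

[database]
connection=mysql://root:1234@127.0.0.1:3306/saharaj

[keystone_authtoken]
auth_protocol=http
admin_user=admin
admin_password=redhat
```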
Then start the sahara service:

systemctl start openstack-sahara-all.service
To test whether sahara is now able to talk to the OpenStack infrastructure with the correct keystone credentials, run:

>>> s = sahara_client()
>>> s.plugins.list()
Go and get you some sahara Hadoop-ready images. Here is a batch of them on the sahara website. You can then add them in via the OpenStack UI.
FINALLY - on to setting up hadoop clusters - and templates - TO BE CONTINUED !
