Update!
I've posted a video on how we glued together our CI for glusterfs-hadoop, following the diagram above. In particular, it covers:
- Setting up Slaves
- Maven version incrementing (when you don't use snapshots, you need to do this manually).
- How to skip builds (well, actually fail them; I haven't gotten a clean skip working) so that commits made automatically to increment a version don't trigger the CI to run again (an infinite loop).
- How to deploy to S3 using maven deploy (see https://github.com/gluster/glusterfs-hadoop/pull/85/files) with environment variables (no need for a settings.xml if you use aws-maven); a sketch of the version bump and deploy steps follows this list.
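As a rough sketch of that version bump plus deploy flow, here's what a single Jenkins "execute shell" step could look like. The Maven Versions plugin goals, the example version number, and the assumption that your aws-maven wagon reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment are all illustrative, not lifted from the pull request above:

# Hypothetical Jenkins "execute shell" step: bump the version, commit it, and deploy.
# Assumes the Maven Versions plugin, an aws-maven wagon wired into pom.xml, and that
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are already exported into the build environment.
NEW_VERSION=2.1.6                              # example version; pick your own scheme
mvn versions:set -DnewVersion="$NEW_VERSION"   # rewrite the version in pom.xml
mvn versions:commit                            # delete the pom.xml.versionsBackup file
git commit -am "bump version to $NEW_VERSION"  # this commit must not re-trigger CI (see the note above)
git push origin master
mvn clean deploy                               # aws-maven pushes the artifact to S3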
POLLING broke / NO WORKSPACE
Upgrading Java recently broke Jenkins polling. This led us to a cryptic "The SCM for this project has blocked this attempt..." message. What happened? Well, Jenkins was pointing to an old JDK. The lesson: the JDK on your slaves has to match the JDK path Jenkins expects, or else the slaves will fail.
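A quick sanity check is to hop onto the slave and confirm the path Jenkins has configured for its JDK actually exists (the host and JDK path below are just placeholders for whatever your "JDK installations" setting points at):

# On the slave: does the JDK Jenkins expects actually exist?
ssh jenkins@my-slave 'ls -ld /usr/lib/jvm/java-1.7.0-openjdk && /usr/lib/jvm/java-1.7.0-openjdk/bin/java -version'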
GIT credentials.
Git can be tricky in Jenkins. As you know, when cloning git repos over SSH, you're sometimes asked to accept GitHub's host key. Since Jenkins is a robot, this sort of thing can trip it up. To fix it, I've ssh'd into my build server and manually done a pull inside the workspace in /var/lib/jenkins. Of course, there are other ways to work around this (e.g. configure ssh not to prompt for host key confirmation).
At that point, answering yes to the prompt adds github.com to the jenkins user's known_hosts, and all your automated SSH-based git pushing will work.
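If you'd rather not answer the prompt by hand, one alternative sketch (assuming the Jenkins home is /var/lib/jenkins, as above) is to pre-seed known_hosts for the jenkins user:

# Pre-accept GitHub's host key so git never prompts the jenkins user
sudo -u jenkins mkdir -p /var/lib/jenkins/.ssh
sudo -u jenkins chmod 700 /var/lib/jenkins/.ssh
ssh-keyscan github.com | sudo -u jenkins tee -a /var/lib/jenkins/.ssh/known_hosts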
Running as ROOT
Not that this is EVER a good idea... but here's how you do it.
# Method 1: set JENKINS_USER in /etc/sysconfig/jenkins
JENKINS_USER=root

# Method 2: directly modify the daemon line in /etc/init.d/jenkins
# original: daemon --user "$JENKINS_USER" --pidfile "$JENKINS_PID_FILE" $JAVA_CMD $PARAMS > /dev/null
echo "WARNING: RUNNING AS ROOT"
daemon --user root --pidfile "$JENKINS_PID_FILE" $JAVA_CMD $PARAMS > /dev/null
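Either way, the change only takes effect after a restart (the command below assumes a SysV-style install, like the init script above):

service jenkins restart
# If you ever switch back to the jenkins user, remember that files written while running
# as root will need a chown -R jenkins:jenkins under /var/lib/jenkins.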
SUDO and TTY
Some builds might need to run commands as root, or else run commands with sudo. In either case, you need to set up Jenkins so that it either runs as root or its user is a sudoer.
But wait! Because Jenkins runs builds on slaves, which execute differently from your normal terminal sessions (i.e. they don't have a TTY), you'd also better edit the /etc/sudoers file on your slave servers to drop any TTY requirement.
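A minimal /etc/sudoers sketch (edit it with visudo; the jenkins user name and the blanket NOPASSWD rule are assumptions, so narrow them to whatever your builds actually need):

# Let the jenkins user sudo from a non-interactive (no TTY) slave session
Defaults:jenkins !requiretty
jenkins ALL=(ALL) NOPASSWD: ALL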
Replace polling with url hooks
Jenkins can trigger a build just by hitting the "build" url:
http://<jenkins_server>:8080/job/<job_name>/build?delay=0sec
Polling can break easily, either because a GitHub URL changed or because polling simply stopped. Try to use a post-commit hook if you can (via GitHub, you can easily set your Jenkins build URL as the post-receive hook, too: https://github.com/<yourname>/<your_project>/settings/hooks).
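For example, the hook (or a plain curl from anywhere) can kick off the job over HTTP. The sketch below assumes you've enabled "Trigger builds remotely" on the job and chosen an authentication token; without one, Jenkins will typically reject anonymous triggers:

# Trigger the job remotely; <your_token> comes from the job's "Trigger builds remotely" option
curl -X POST "http://<jenkins_server>:8080/job/<job_name>/build?token=<your_token>&delay=0sec"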
Post/Pre build Shell commands
So the pervasive problem with bash scripts: You never know where they are running.
- In Jenkins, the "execute shell" step runs in the workspace. So, for example, if your project is called "my_project", any bash commands you run execute in the workspace directory (e.g. /var/lib/jenkins/workspace/my_project).
- State isn't maintained: if you set an environment variable, for example, it won't be there in the next shell step. My workaround is to use cat with a heredoc (EOF) to write an env.sh script containing the environment variables Jenkins owns (e.g. S3 deploy keys, build artifact paths), and then source that script; a sketch follows this list.
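Here's a minimal sketch of that env.sh trick across two "execute shell" steps; the S3_KEY / S3_SECRET / BUILD_ARTIFACT names are made up for illustration, standing in for whatever Jenkins injects into your build:

# Step 1: write the environment to a file in the workspace
cat > env.sh <<EOF
export S3_KEY="${S3_KEY}"
export S3_SECRET="${S3_SECRET}"
export BUILD_ARTIFACT="target/my_project-1.0.jar"
EOF

# Step 2 (a later "execute shell" step in the same job): reload it before use
source ./env.sh
echo "deploying ${BUILD_ARTIFACT} to s3"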