Friday, March 11, 2011

Part 2: Continuous Deployment with Pinax and Jenkins

Part 1 is here.

So once Jenkins is building and testing the code, we need a way to copy that code over to our staging server (we don't have a real production server yet). Our Jenkins user is the same as the staging server's user, so it's simply a matter of copying things over in a script. If this were not the case, we'd have to install the "Publish over SSH" Jenkins plugin and use that to copy files over and establish the symlinks.

Instead of having it as an additional step in the build process, our setup uses a separate task that is executed after the CI task is completed. However, we start off in the CI task's workspace so that we can clean up before moving things over.

Create a Jenkins Job:

Again, click "new job", select "build a free-style software project" and make sure the title has no spaces in it.

Job Settings:
  • Again, fill out a description. This is not going to do anything with Git or Github, so you don't need to fill out those sections.
  • In "Advanced Options", select "Use custom workspace" and put in the path to the CI task's workspace. The path is relative to the root Hudson folder (typically ".hudson"), so your path should be something like "jobs/[CI task name]/workspace".
  • Instead of polling, we will build after other projects are built. Check the box and type in the name of the CI task.
Build steps:

Step 1: Cleanup

Remember that this is starting from the CI task workspace, so the virtualenv should already be set up. We just use this to remove the pyc files before copying.
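The original script isn't shown, so here's a minimal sketch of the cleanup, assuming a plain find over the workspace:

```shell
#!/bin/bash
# Sketch of the cleanup step: delete compiled Python files so stale
# .pyc files don't ride along into the staging copy.
clean_pyc() {
    find "$1" -name "*.pyc" -delete
}
# In the Jenkins build step this would run against the CI workspace:
# clean_pyc "$WORKSPACE"
```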

Step 2: Create new folder and copy

Simple script to create a new folder and copy. We use the environment variable "BUILD_TAG" to name our folders; it comes out as jenkins-[job name]-[build number].
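A sketch of that script; the destination directory is a placeholder, not the path from the original setup:

```shell
#!/bin/bash
# Sketch: copy the cleaned workspace into a per-build folder named
# after Jenkins' BUILD_TAG environment variable.
deploy_copy() {
    local dest_root="$1"    # e.g. the staging server's builds folder
    mkdir -p "$dest_root/$BUILD_TAG"
    cp -r . "$dest_root/$BUILD_TAG/"
}
# deploy_copy /usr/local/share/builds   # hypothetical path
```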

Step 3: Start staging virtualenv and pull in external files

We use a separate virtualenv for the staging server, so we don't need .env anymore. We also copy a separate file and establish a symlink to the staging server's site_media folder.
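The paths and file names below are placeholders, but this is the shape of the step:

```shell
#!/bin/bash
# Sketch: inside the new build folder, switch to the staging
# virtualenv and link to the shared site_media directory, which
# lives outside the build so uploaded media survives deployments.
prepare_staging() {
    local build_dir="$1" staging_env="$2" media_dir="$3"
    cd "$build_dir"
    source "$staging_env/bin/activate"   # staging virtualenv, not the CI one
    # (the separate settings file mentioned above would be copied in here)
    ln -sfn "$media_dir" site_media
}
```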

Step 4: Update the staging environment's requirements and the database

We update the requirements using the same pip command from part 1. We then sync the database. We also use South for database migrations, so we also execute the migrations.
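A sketch of this step; the requirements file name is an assumption, while syncdb and migrate are the standard Django and South management commands:

```shell
#!/bin/bash
# Sketch: refresh dependencies, then sync and migrate the database.
# Assumes the staging virtualenv from the previous step is active.
update_staging() {
    pip install -q -r requirements.txt   # same quiet pip call as in part 1
    python manage.py syncdb --noinput    # create any new tables
    python manage.py migrate             # run South schema migrations
}
```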

Step 5: Establish symlink and reload the code

We use mod_wsgi in daemon mode, which means we don't need to restart the web server when the code changes. mod_wsgi runs off the "makahiki" symlink, so all we need to do to update the code is repoint that symlink. To be extra sure, we touch our WSGI script to make sure it reloads.
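A sketch of the switch-over; all paths are placeholders:

```shell
#!/bin/bash
# Sketch: atomically repoint the symlink mod_wsgi serves from, then
# touch the WSGI script so the daemon process picks up the new code.
activate_build() {
    local new_build="$1" link="$2" wsgi_script="$3"
    ln -sfn "$new_build" "$link"    # -n replaces the old symlink in place
    touch "$wsgi_script"            # force mod_wsgi (daemon mode) to reload
}
# Rolling back is the same call pointed at an older build folder.
```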

And that's it! We now have a project that polls Github, runs tests, and then deploys the code. We can also roll back by changing the symlink to a previous build.

Part 1: Continuous Deployment with Pinax and Jenkins

I admire system admins. They do many things with scripts and commands that are a bit arcane to me. I had heard the term "Continuous Deployment" months ago back when Digg was going through their redesign. Continuous deployment, if you don't know, basically means a script updates code on a live server once a developer commits it. I thought it was an interesting idea, but I'd never be able to pull off such a thing.

Fast forward to today, where we have a Jenkins instance and now multiple developers. Being the lead developer/sys admin on this project by default (I was its sole developer for a while), it was up to me to set up continuous integration and then, if possible, pull off continuous deployment. This blog will describe setting up the CI task.

I had put our code into Hudson months ago, but I had forgotten about it and later found out it wasn't running. It was also having weird connectivity issues, so we figured this would be a great time to upgrade. In the intervening months, other people, like Rob Baines, have written much better posts on getting Jenkins running with Django/Pinax. As it turns out, our Jenkins setup is not all that different from the one described in Rob's post. I'll lay out the steps and note where we diverged from Rob's scripts; his post is a great place to get a little more detail.

Prerequisites:
  • Jenkins (if you have a Mac, use homebrew and just 'brew install jenkins').
  • virtualenv ('pip install virtualenv')
  • Python 2.4 or higher
  • some kind of database (optional, by default we use SQLite3)

This guide, like Rob's, assumes the host system is UNIX (Linux or Mac). Sorry, Windows users.

Jenkins plugins:
  • We use Git, so we need the git plugin. You can also install the github plugin if you'd like (provides links to github).
  • Cobertura
  • I don't use the setenv plugin. Rob uses it to set up a path to the virtualenv, but I don't think it's necessary.
Create a Jenkins Job:

Click new job and select "build a free-style software project". Type in a project name and make sure it has no spaces.

Job Settings:
  • Put in a description, link to Github project (if using the Github plugin).
  • In source code management, select Git. Get your project's read-only repo URL (Jenkins only pulls, so read-only access is fine) and specify a branch to build (I don't know what "default" refers to, so I explicitly put master).
  • If using the Github plugin, you can fill out the repository browser (githubweb) and URL as well.
  • We set Jenkins to poll the repo every 5 minutes, which in cron syntax comes out as "*/5 * * * *".
Build steps:

Step 1: Create virtualenv if it doesn't exist

It's not all that different from Rob's, but since Pinax as of this writing (version 0.7.3) is not available on PyPI, we download the tarball and install it. This does mean we're somewhat stuck with a certain version of Pinax unless we update it by hand.
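A sketch of the guard; the virtualenv directory name and the tarball URL are placeholders, since neither is given above:

```shell
#!/bin/bash
# Sketch: build the virtualenv only on the first run, then install
# Pinax from its tarball. PINAX_TARBALL_URL is a placeholder.
bootstrap_env() {
    if [ ! -d ve ]; then
        virtualenv ve
        ve/bin/pip install "$PINAX_TARBALL_URL"
    fi
}
```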

Step 2: Install and update dependencies

Similar to Rob's, though I had dumped everything into a single requirements file. In the future, we might want to split up the requirements based on whether they're for a developer, Jenkins, or the live server. Also, I passed the -q flag to silence pip; otherwise the console fills with lines saying the requirements are already satisfied.
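In shape, this step is a single command (the requirements file name is an assumption):

```shell
#!/bin/bash
# Sketch: update every dependency from the single requirements file;
# -q suppresses the "requirement already satisfied" chatter.
update_deps() {
    pip install -q -r requirements.txt
}
```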

Step 3: Update

Identical to Rob's. We may move to MySQL, but those database settings cannot be in source control. Instead, I'll probably put a different script on the server.

Step 4: Execute Tests

The only thing I changed was adding some extra parameters to the nosetests command: --with-xunit creates a nosetests.xml file for use in reporting test results, and --exe tells nosetests not to skip tests that have executable permissions. As for an explanation of the coverage commands, I'll defer to Rob.
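A sketch of the test step; the coverage invocation follows the common nose/coverage pattern and is not necessarily the exact command used here:

```shell
#!/bin/bash
# Sketch: run the suite under coverage, emitting nosetests.xml (via
# --with-xunit) and coverage.xml for the post-build reports.
run_tests() {
    coverage run manage.py test --with-xunit --exe
    coverage xml    # writes coverage.xml for the Cobertura plugin
}
```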

Post-build Actions:
  • Check "Publish JUnit report" and give it a path to the nosetests.xml file ("**/nosetests.xml").
  • Check "Publish Cobertura coverage report" and give it a path to the coverage.xml file ("**/coverage.xml").
And that's it for the CI task. So what about continuous deployment?