Nick Schuch, Operations Lead
Drupal continuous integration with Docker
Continuous integration platforms are a vital component of any development shop. We rely on them heavily to keep projects at the quality they deserve. As early adopters of Docker (0.7.6) for our QA and Staging platform, we thought it was time to take our CI environment to the next level!
The platforms
So first, a little bit about the platforms.
Out with the old
Our original environment was powered by Jenkins, which is awesome: a powerful CI server with many community-contributed plugins. However, we were running two environments per project:
- PR - An environment that runs the test suite when pull requests are created on Github.
- HEAD - An environment that runs the test suite when commits are pushed to master.
This caused a few issues, the main one being that builds would occasionally inherit leftover state, such as stale files or a Solr index from a previous run. That is a huge problem when it comes to ensuring consistent builds. Our other major issue was that this infrastructure lived on a single host, which meant it didn't scale very well (if at all); builds had to come to a stop while we turned off the host and increased its resources.
In with the new
We went into the development of this new infrastructure with the following goals:
- Leverage our existing Docker containers (QA/Staging environments) for platform consistency.
- Allow concurrent builds so we get a faster feedback loop on popular projects.
- Use a Jenkins master/agent configuration for scalability. If we have a busy month, no worries, we will add more agents.
- Leave current PR builds available for QA purposes while the Pull Request is still open.
- Provide the ability to switch versions of PHP, Solr, RabbitMQ, etc.
As you can see we had some ambitious goals, and we were able to achieve them!
The workflow
So let's look into this build process a little more. Some key takeaways are:
- Jobs are sent to a node in the cluster rather than run on the Jenkins master itself.
- All builds are fresh environments, with the service containers started first so we can leverage Docker container links (see the startup sketch after this list). This means every single build spins up a new environment with no build artifacts that could tamper with results. The environments are also isolated, as opposed to the old CI, where we used the same MySQL instance for all the databases.
- We are running phing tasks so we have consistency in the commands that we run to test our code.
- Github pull requests get notified with a message like the one below. While this is something we have seen time and time again, I still think it's awesome.
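To make the container-links point concrete, here is a minimal sketch of what a per-build startup script could look like. It is illustrative only: the container naming and the MySQL and Solr images and tags are assumptions, while `previousnext/lamp55` comes from the build definition shown later in this post.

```bash
#!/bin/bash
# Illustrative per-build startup, assuming a recent Docker CLI.
# BUILD_ID and the mysql/solr image tags are hypothetical.
BUILD_ID=$1

# Start the service containers first...
docker run -d --name "mysql_${BUILD_ID}" mysql:5.5
docker run -d --name "solr_${BUILD_ID}" previousnext/solr:4

# ...so the application container can link to them. The links inject
# MYSQL_* and SOLR_* environment variables, meaning each build talks
# only to its own freshly created services.
docker run -d --name "web_${BUILD_ID}" \
  --link "mysql_${BUILD_ID}:mysql" \
  --link "solr_${BUILD_ID}:solr" \
  -P previousnext/lamp55
```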
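As for the notification itself, our bash scripts (covered below) handle the Github commenting. A hedged sketch of posting such a comment via the Github API, with a placeholder token and message:

```bash
#!/bin/bash
# Hypothetical Github commenting step. GITHUB_TOKEN, the PR number
# and the message would be supplied by the Jenkins job.
REPO="previousnext/pnx"
PR_NUMBER=$1
MESSAGE=$2   # e.g. a pass/fail summary with a link to the environment

# Pull request comments are posted via the issues endpoint.
curl -s \
  -H "Authorization: token ${GITHUB_TOKEN}" \
  -d "{\"body\": \"${MESSAGE}\"}" \
  "https://api.github.com/repos/${REPO}/issues/${PR_NUMBER}/comments"
```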
Under the hood
So, as you might have already guessed, we are using Docker under the hood, but what about the glue that holds it together? We utilize the following technologies:
- Puppet - We take advantage of the Docker Puppet module to lock in our Docker version and ensure that all our containers are pulled from the Docker Hub and ready to go. We also use Puppet to define our projects' builds with Hiera data:
```yaml
builds:
  pnx_pr:
    human_name: 'PNX: Pull request'
    description: 'Triggered by Github.'
    project: 'pnx'
    github_project: 'previousnext/pnx'
    application: 'previousnext/lamp55'
    steps:
      - 'phing prepare'
      - 'phing test'
```
- Nginx - We call this the "Router". It acts as a single-point proxy that routes requests to our built environments, and it also provides a nice security layer (see the vhost sketch after this list).
- Jenkins - This is our "trigger man". It runs all our builds and is in charge of the nodes we build on.
- Bash - These are bash scripts generated by Puppet. The scripts range from builds to Github commenting and container cleanup (a cleanup sketch follows this list).
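To illustrate the Router's role, here is a hypothetical sketch of a script templating an Nginx vhost for a freshly built environment. The domain, config path, and port handling are all invented for illustration:

```bash
#!/bin/bash
# Hypothetical router step: expose a build environment under its own
# subdomain by templating an Nginx vhost. Domain and port are made up.
BUILD_NAME=$1   # e.g. "pnx_pr_123"
PORT=$2         # host port published by the build's web container

cat > "/etc/nginx/conf.d/${BUILD_NAME}.conf" <<EOF
server {
  listen 80;
  server_name ${BUILD_NAME}.ci.example.com;

  location / {
    proxy_pass http://127.0.0.1:${PORT};
    proxy_set_header Host \$host;
    proxy_set_header X-Real-IP \$remote_addr;
  }
}
EOF

# Reload Nginx so the new environment is routable immediately.
nginx -s reload
```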
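On the cleanup side, a minimal sketch of tearing down a pull request's containers once it closes, assuming the containers are named after the build:

```bash
#!/bin/bash
# Hypothetical cleanup: remove all containers belonging to a build.
# Assumes containers are named with the build identifier.
BUILD_ID=$1   # e.g. "pnx_pr_123"

# Force-remove every container whose name contains the build id.
docker ps -a | grep "${BUILD_ID}" | awk '{print $1}' | xargs -r docker rm -f
```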
To get a good start with Jenkins and Docker, go check out The Docker Book. It was released very recently and is a great resource for getting started with Docker and integrating it with Jenkins.
Below is a diagram of an agent used for builds. It depicts how all these technologies work together.
Conclusion
What we have achieved in such a short period of time is making a big difference. We now not only have more consistent builds (and everything else discussed above), we also have a CI framework that has opened up more than just testing possibilities (which we will cover in future blog posts). Until then, if you are looking for a better way to do CI, I am happy to say this is a great option.