Running Continuous Integration on a Shoestring with Docker and Fig

One of the things I love about Continuous Delivery (CD) is the "Show, don't Tell" aspect of the process. While we can often convince a customer or coworker of the 'right thing to do', some people are harder to sell, and nothing beats a demonstration.

The downside of Continuous Delivery is that, on the face of it, it uses a lot of hardware: multiple copies of multiple servers, all doing nominally the same thing if you don't understand the system. Cloud services are great for proving out the system due to the low monthly outlay, but not all organizations allow them. Maybe it's a billing issue, or concern about your source getting stolen, or in an older company it may be a longstanding IT policy. If a manager believes in the system, they may be willing to stick their neck out and get paperwork signed or policies changed. But how do you get them on board in the first place? This chicken-and-egg problem has been bothering me for a while now, and Docker helps a lot with this situation.

Jenkins in a Box

The thing I wanted to know was: "could I get a CI server and all of its dependencies into a set of Docker containers?" It turns out not only is the answer 'yes', but most of the work has already been done for us. You just have to wire the right things together.

Why start here?

The Big Ask for hardware starts with the build environment.

Continuous Delivery didn't always exist as a term. Before that it was just a concept. You start with a repeatable build. You automate compiling the code. You automate testing the code. You set up a build server so you know if it's safe to pull down trunk/master in the morning. You start enforcing clean builds of trunk/master. You automate packaging the code. Then you automate archiving the packages. One day you wake up and realize you have a self service system where QA can pull new versions onto their test systems and from there it's a short leap to capturing configuration and doing the same thing in staging and production.

But halfway through this process, you needed to do UI testing. For web apps that means Selenium. PhantomJS is a good starting point, but there are many things that only break on Firefox or Chrome. Running a browser in a VM without a video card takes some special knowledge that not everybody has. And when the tests break you can't always reproduce them locally. Sooner or later you need to watch the build server run the tests to get a clue why things aren't working. Nothing substitutes for pixels. Sauce Labs can solve this for you, but we're trying to start small.

The Plan

Most of what you need is out there; we just have to stitch it together. The Jenkins team maintains Docker images. SeleniumHQ has their own as well, which can run Firefox and Chrome in a headless environment. They also have 'debug' builds with support for VNC connections, which we'll be using. What we need is a Fig script to connect them to each other, and the Jenkins slaves need our development toolchain.

We need:

1. A Jenkins instance

2. A Selenium Grid (hub) to dole out browsers

3. Selenium 'nodes' which can run browsers

4. A Jenkins slave that can see the Selenium Grid

5. SSH keys on the slave so that Jenkins can talk to it


Rather than modifying the Jenkins image, I opted to build a custom Jenkins Slave. Personally, I prefer not to run slaves on the Jenkins box. First, the hardware budget for the two is very different. Slaves are IO, memory, and CPU bound. The filesystem can be deleted between builds with few repercussions. The Jenkins server is a different beast. It needs to be backed up, it uses a lot of disk space for artifacts (build statistics and test reports, even if you store your binaries in a system of record), and it needs some bandwidth. There are many ways for a bad build to take out the entire server, and I would rather not even have to worry about it.

Also, it's probable you already have a Jenkins server, and it's easy enough to tweak this demo code to work with your existing server without impacting your current operations.

Fig to the rescue

Fig is a great Docker tool for wiring up a bunch of services to each other. Since I know a lot of people who like to poke at the build environment, I opted to write a Fig file where all of the ports are wired to fixed port numbers on the host operating system.
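To make this concrete, here's a rough sketch of what such a fig.yml can look like. The image names, service names, and host-side ports here are illustrative assumptions on my part; see the examples linked at the end of this article for a working file.

jenkins:
  image: jenkins
  ports:
    - "8080:8080"
  volumes:
    - ~/jenkins_home:/var/jenkins_home

slave:
  build: ./slave
  ports:
    - "2222:22"
  links:
    - hub

hub:
  image: selenium/hub
  ports:
    - "4444:4444"

firefox:
  image: selenium/node-firefox-debug
  ports:
    - "5950:5900"
  links:
    - hub

chrome:
  image: selenium/node-chrome-debug
  ports:
    - "5960:5900"
  links:
    - hub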

You'll need to install Fig of course (it's not part of the Docker install, or at least not yet). You'll also need to create a ~/jenkins_home directory, which will contain all of the configuration for Jenkins, and generate an SSH key pair for Jenkins (ssh-keygen will do this), copying the public key into authorized_keys for the slave. Then you can just type in two magic little words:

fig up

And after a few minutes of downloading and building images, you'll have a Jenkins environment running in a box.

You'll have the following running (substitute the boot2docker VM's IP for localhost if you're running boot2docker):

1. Jenkins on localhost (the Jenkins image serves HTTP on port 8080)

2. A Jenkins slave listening for SSH connections on its mapped host port

3. A virtual desktop running Firefox tests, with VNC listening on its mapped host port

4. A virtual desktop running Chrome tests, with VNC listening on its mapped host port

5. The Selenium hub listening on port 4444 (behaving similarly to selenium-standalone)

Further Improvements

If that's not already cool enough for you, there are some more steps I'll leave as an exercise for the reader.

Go smaller: Single node

On small projects, it's not uncommon to run the integration tests sequentially, with a single browser open at a time, to avoid concurrent modification issues resulting in false build failures.

I did an experiment where I took the SeleniumHQ Chrome debug image, dropped Firefox on it as well, and changed the configuration to offer both browsers. I run this version in compact.yml instead of the two nodes run in the normal example. This means only one copy of X11 and xvfb is running, and you only need one VNC session to see everything. The trouble with this is ongoing maintenance. I've done my best to create the minimum configuration possible, but it's always possible that a new SeleniumHQ release won't be compatible. For this reason I'd say this should only be used for phase 1 of a project, and eliminating this custom image should be a priority ASAP.

fig --file=compact.yml build

fig --file=compact.yml up

This version of the system peaked at a little under 4 GB of RAM. With developer-grade machines frequently having 16 GB of RAM or more, this becomes something you could actually run on someone's desktop for a while. Or you could split it and run it on two machines.

Go bigger: Parallel tests

One of the big reasons people run Selenium Grid is to run tests in parallel. One cool thing you can do with Fig is tell it "I want you to run 4 copies of this image" by using the fig scale command, and it will spin them up. The tradeoff is that at present it doesn't have a way to deal with fixed port numbers (there's no support for port ranges), so you have to take out the host-side port mappings (e.g., "5950:5900" becomes "5900"). The consequence is that every time you restart Fig, the ports tend to change. But watching a parallel test run over VNC would be challenging to say the least, in which case you might opt not to run VNC at all. In that case you can save some resources by using the non-debug images.
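For example, assuming the Chrome node service is named chrome in your Fig file (as in the sketch earlier), you could start four copies with:

fig up -d

fig scale chrome=4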

Examples and Further reading



Selenium HQ Docker

Jenkins images in the Docker Registry

Protractor: Using the Page Object Model

What is Protractor?

Protractor is an end-to-end (e2e) test automation framework for AngularJS applications. It is an open source Node.js program built on top of WebDriverJS, originally developed by a team at Google. Test cases written in Protractor run in the browser, simulating the actions of a real user. An e2e test written in Protractor makes sure your application behaves as expected.

Challenge: Code Duplication

There is always duplication in test cases. For instance, login, find, and logout are clearly duplicated in the following two test cases:

Test case 1: login to the website, find an item, add it to my wish list and logout.

Test case 2: login to the website, find an item, add it to cart, purchase and logout.

Duplicate test cases result in code duplication. An e2e test suite with code duplication is difficult to maintain and requires costly modifications. In this tutorial, we will implement a page object design best practice for Protractor to minimize code duplication, make tests more readable, reduce the cost of modification, and improve maintainability.  

The most important concept here is to separate the abstraction of the test object (the page) and the test script (the spec). Hence, a single test object can be used multiple times by test scripts without rewriting it.

Using the PhoneCat application

We will use the popular AngularJS PhoneCat application to demonstrate how Protractor tests could make use of the page object design pattern to create simple and maintainable e2e test automation.

A concise set of instructions on how to set up the PhoneCat application on your local machine is at the end of this post.

Abstraction: Separation of Test Object from Test Script

The PhoneCat app has the ‘phones list view’ page where all available phones are listed. A user can search or change the order of the listed phones on the page. When selecting a phone from the list, a user navigates to the ‘phone details view’ page, where more details about the selected phone are included.

In line with the page object design pattern best practice: the PhoneCat application has two test objects, the phones list view page and the phone details view page. Each of the pages should be self-contained, meaning they should provide all the locators and functions required to interact with each page. For example, the phones list view page should have a locator for the search input box and a function to search.

The image below shows the separation of the test object (page object files) from the test script (spec files). The spec files under the spec folder contain only test scripts. The page object files under the page object folder contain page specific locators and functions.

Figure 1: Separation of page object from test specification

Test Object (Page Object)

The PhoneCat application has the phones list page and the phone details page. The following two page object files provide the locators and functions required to interact with these pages.

// phones.page.js (file name assumed; the specs below require it from a page_objects folder)
var Phones = {

    elements: {

        _search: function () {
            return element(by.model('query'));
        },

        _sort: function () {
            return element(by.model('orderProp'));
        },

        _phoneList: function () {
            return element.all(by.repeater('phone in phones'));
        },

        _phoneNameColumn: function () {
            // 'phone.name' is the column binding used by the PhoneCat app
            return element.all(by.repeater('phone in phones').column('phone.name'));
        }
    },

    _phonesCount: function () {
        return this.elements._phoneList().count();
    },

    searchFor: function (word) {
        this.elements._search().sendKeys(word);
    },

    clearSearch: function () {
        this.elements._search().clear();
    },

    _getNames: function () {
        return this.elements._phoneNameColumn().map(function (elem) {
            return elem.getText();
        });
    },

    sortItBy: function (type) {
        this.elements._sort().element(by.css('option[value="' + type + '"]')).click();
    },

    selectFirstPhone: function () {
        element.all(by.css('.phones li a')).first().click();
        // return the details page object so specs can chain into it (file name assumed)
        return require('./phone.details.page.js');
    }
};

module.exports = Phones;

Listing 1: phones.page.js (the phones list page object)

// phone.details.page.js (file name assumed, matching the require() in Listing 1)
var PhoneDetails = {

    elements: {

        _name: function () {
            // 'phone.name' is the binding used on the PhoneCat details page
            return element(by.binding('phone.name'));
        },

        _image: function () {
            // selector assumed to match the main image in the PhoneCat app
            return element(by.css('img.phone.selected'));
        },

        _thumbnail: function (index) {
            return element(by.css('.phone-thumbs li:nth-child(' + index + ') img'));
        }
    },

    _getName: function () {
        return this.elements._name().getText();
    },

    _getImage: function () {
        return this.elements._image().getAttribute('src');
    },

    clickThumbnail: function (index) {
        this.elements._thumbnail(index).click();
    }
};

module.exports = PhoneDetails;

Listing 2: phone.details.page.js (the phone details page object)

Test Script (Spec)

The test script can now make use of the page object files. All the functions required to interact with the page (the test object) are encapsulated in the page object and the test scripts are more readable and concise.

describe('Phone list view', function () {

    var phones = require('../page_objects/phones.page.js'); // file name assumed

    beforeEach(function () {
        browser.get('app/index.html#/phones');
    });

    it('should filter the phone list as a user types into the search box', function () {
        expect(phones._phonesCount()).toBe(20);

        phones.searchFor('nexus');
        expect(phones._phonesCount()).toBe(1);

        phones.clearSearch();
        phones.searchFor('motorola');
        expect(phones._phonesCount()).toBe(8);
    });

    it('should be possible to control phone order via the drop down select box', function () {
        phones.searchFor('tablet'); //let's narrow the dataset to make the test assertions shorter

        expect(phones._getNames()).toEqual([
            "Motorola XOOM\u2122 with Wi-Fi",
            "MOTOROLA XOOM\u2122"
        ]);

        phones.sortItBy('name');

        expect(phones._getNames()).toEqual([
            "MOTOROLA XOOM\u2122",
            "Motorola XOOM\u2122 with Wi-Fi"
        ]);
    });

    it('should render phone specific links', function () {
        phones.searchFor('nexus');
        phones.selectFirstPhone();

        browser.getLocationAbsUrl().then(function (url) {
            expect(url).toBe('/phones/nexus-s');
        });
    });
});

Listing 3: phones.spec.js

describe('Phone detail view', function () {

    var phones = require('../page_objects/phones.page.js'), // file name assumed
        phoneDetails;

    beforeEach(function () {
        browser.get('app/index.html#/phones');
        phones.searchFor('nexus');
        phoneDetails = phones.selectFirstPhone();
    });

    it('should display nexus-s page', function () {
        expect(phoneDetails._getName()).toBe('Nexus S');
    });

    it('should display the first phone image as the main phone image', function () {
        expect(phoneDetails._getImage()).toMatch(/img\/phones\/nexus-s.0.jpg/);
    });

    it('should swap main image if a thumbnail image is clicked on', function () {
        phoneDetails.clickThumbnail(3);
        expect(phoneDetails._getImage()).toMatch(/img\/phones\/nexus-s.2.jpg/);

        phoneDetails.clickThumbnail(1);
        expect(phoneDetails._getImage()).toMatch(/img\/phones\/nexus-s.0.jpg/);
    });
});

Listing 4: phone.details.spec.js

In conclusion, when the page object design pattern is properly used in Protractor test automation, it makes e2e tests easier to maintain and reduces code duplication.


GitHub Repo for This Tutorial

The GitHub repo for this tutorial adapts the PhoneCat sample to the page object model. It is basically the sample Protractor test (scenarios.js) of the PhoneCat app rewritten in a page object model.

This could be a good starting point for discussion on the application of the page object model to improve the maintainability of Protractor tests.


The following table shows the main benefit of the page object model, which is minimizing code duplication. The table compares the Protractor test included in PhoneCat (scenarios.js) against a Protractor test (the page object files plus phones.spec.js and phone.details.spec.js) that implements the same test cases with the page object model. As the table shows, even in this simple test, code duplication is enormous when implemented without the page object model. In contrast, code duplication when implemented with the page object model is minimal.


Table 1: Comparison of code duplication with and without the page object model



PhoneCat app: the Setup

1. Install Git and Node.js.

2. Clone the angular-phonecat repository: $ git clone --depth=14 https://github.com/angular/angular-phonecat.git

3. Change your current directory to angular-phonecat ($ cd angular-phonecat). Download the tool dependencies by running $ npm install.

4. Use the npm helper scripts to start a local development web server ($ npm start). This creates a local web server on your machine, listening on port 8000. Browse the application at http://localhost:8000/app/index.html

5. To install the drivers needed by Protractor, run $ npm run update-webdriver; to run the Protractor end-to-end tests, run $ npm run protractor.

Refer to the AngularJS site for complete instructions.

Final note: if you want to try the code samples given in this tutorial, besides creating the folders, the page object files, and the spec files, you need to change the path to the new spec files in the protractor-conf.js file. Simply change spec: ['e2e/*.js'] to spec: ['e2e/spec/*.spec.js'], or to whatever path holds your spec files.

Related Works

1.     Using Page Objects to Organize Tests

2.     Using Page Objects to Overcome Protractor's Shortcomings

3.     Getting Started with Protractor and Page Objects for AngularJS E2E Testing




Three Ways To Share Code

There are three primary ways to collaboratively share code.

  • As source.
  • As a service.
  • As a library.

These aren't mutually exclusive, but represent the main deployment strategy. For example, you might offer a library (e.g. via a Maven or NuGet repository), but also make the source available.

Source is fine, but you really only want a focused team of 5-7 working closely to manage the incoming commits. Otherwise, the code suffers from the tragedy of the commons - your tests, code coverage, and overall quality will suffer. Source sharing also makes dependency management hard - "did you build the code from this morning at 9:38am or 9:52am?"

Running a service (e.g. REST/JSON) is actually really hard because of the dependency management issues. "Well, we'd like to update the staging server services, but that will break three other teams." Interestingly, the main reasons for running a service are data management & security, not code sharing. It's possible, but you really, really need to think about service version management.

Sharing code as a library, using a proper repository manager with dependency management is by far the easiest strategy. Set up a 1-click deployment with a CI tool, and you're off to the races.

As a simple set of guidelines:

  • 5-7 people with direct commit per source repository, max.
  • All other incoming code should be submitted to that team as patches/pull requests.
  • If the security and/or data are the key value of the code, it's a service.
  • Otherwise, if possible, publish your code as a library to an appropriate binary repository (public or private as needed).

Jasmine 2.0 Matchers, with AngularJS

One of the breaking changes in Jasmine 2.0 was a change to how Matchers are written. Using Jasmine with AngularJS introduces another set of limitations, which I will cover in due course.

Why Matchers?

A matcher lets you extract repeated code around your 'assert' and 'equals' methods and reuse it across all of your tests. In addition to removing potential bugs in your tests (debug once, reuse everywhere), matchers can also provide more detailed text for failed tests than you get from the built-in test methods.

Code Reuse

We spend a lot of time teaching people not to repeat themselves when writing production code, and people are naturally averse to breaking all of these rules when they write tests. In reaction, people have invented a lot of solutions to this 'problem'; some of them are good and a lot of them are counterproductive: they help you write the initial set of tests but make it hard to keep them working over time. See the endless "DRY vs DAMP" debates that rage seemingly forever on the internet.

The authors of testing frameworks recognize this problem, and most frameworks provide a generous set of tools for eliminating these issues. Unfortunately those tools are often misused, or aren't used at all. Matchers are a crucial but often overlooked tool in this toolbox.


Expected undefined to be 'true'.

How many times have you seen this dreaded message? What does that even mean? It might as well say "Test failed.", which is exactly what the line preceding the error said, so it provides no extra information whatsoever.

A Matcher gives you an opportunity to provide a detailed failure message, providing debugging information to the user when it is the most useful. Often it can steer them to a solution without ever having to use the debugger.

Expected <input name="foo" type="checkbox"></input> to be checked.

Doesn't that tell you so much more about what's wrong?

How do Matchers work?

In most test frameworks a Matcher provides two answers for every call: whether the test case passed, and what error to display if it didn't. The framework watches for the failure and handles the bookkeeping to determine which tests passed and which didn't, and where they failed if they didn't.

However, Jasmine goes one step farther in 2.0. In a bid to remove a lot of nearly duplicate Matchers, they introduced the .not operator that inverts the result of the test. Now instead of needing a toBeNull() and toNotBeNull() matcher, I just need toBeNull() and if I want the opposite of that I use not.toBeNull().
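For example, with a hypothetical result variable:

expect(result).toBeNull();
expect(result).not.toBeNull(); // the same matcher, inverted by the framework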

Unfortunately this requires a different structure for the Matcher functions, which isn't backward compatible, and may look a little odd if you don't understand all of this background I've shared with you.

toBeHidden: function () {
    return {
        compare: function (actual) {
            var expected = 'ng-hide';
            var pass = actual.hasClass(expected);
            var toHave = pass ? "not to have" : "to have";

            return {
                pass: pass,
                message: "Expected '" + angular.mock.dump(actual) + "' " + toHave + " a class '" + expected + "'."
            };
        }
    };
}





What's going on in here?

At the innermost point of this code we're testing a DOM element for visibility, assuming it is using the ng-hide directive to conditionally display a piece of UI. The rest of the code seems to be about putting together an error message.

In Jasmine as each test runs, it generates a status and a message. If the status is 'false' then the message appears in the test results, otherwise it is swallowed. Unless the .not operator was used, in which case if the status is 'true' then the message appears.

So what we do is check the assertion; if it returns false we generate an error explaining that we wanted the condition to be true. If it returns true we generate an error explaining that we did not want this condition to be true.

The last bit, and perhaps the most important, is the debugging output in the response message. angular.mock.dump(actual) turns a cryptic error with very little useful content into a message that contains the object under test. In this case it's a DOM element, and so the user will have a much better idea of what's broken and can hone in on the solution more quickly.

Loading Matchers

The Problem

In the old days with Jasmine there were many ways to get your Matchers loaded. You could just poke them into one of the data structures. However, Angular does some monkey patching of Jasmine, and several of the old strategies no longer work. The official Jasmine documentation recommends loading the matchers at the top of every test suite. While this is compatible with Angular's strategy of reloading Jasmine over and over again, with the tendency toward many small modules with small test suites, that boilerplate can get a bit crazy.

The Solution

Some clever people figured out that a naked beforeEach (outside of a describe()) works just fine in Angular.js.


/**
 * matchers.js
 */

beforeEach(function () {

    jasmine.addMatchers({

        toContainText: function () {
            // ... compare function, following the toBeHidden pattern above
        },

        toHaveClass: function () {
            // ...
        },

        toBeHidden: function () {
            // ...
        }
    });
});

If you load the matchers before the first test file, then this block will run before every test, and you're good. Jasmine is so fast that running this bit of boilerplate before every test hardly impacts your test speed. On my last project our unit tests averaged 15 milliseconds per test (1400 tests in 22 seconds), which is well within the range of the definition of 'fast unit tests'.

I've provided a matchers.js file for you, containing this setup pattern along with a few of my favorite Angular-compatible matchers.
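For instance, if you happen to run your tests through Karma (a common choice for Angular projects, though this pattern doesn't require it), listing matchers.js ahead of the specs in the files array is all it takes. The paths below are assumptions about your project layout:

// karma.conf.js (fragment)
files: [
    'test/matchers.js',   // the naked beforeEach registers matchers before every suite
    'src/**/*.js',
    'test/**/*.spec.js'
]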

Designing a good matcher

One of the tenets of testing is that unit tests should test (or assert) exactly one thing. This means you set up a scenario, and then prove that one single, specific aspect of that scenario holds true. In real code it's common for a single scenario to have a number of consequences, and if you want one assertion per test, that means you're going to be repeating a lot of effort and code.

The setup and teardown (beforeEach, afterEach) methods remove the lion's share of boilerplate from your tests, and in Jasmine they can go even farther, because you can organize partly related tests with nested setup methods, removing far more duplication from your boilerplate.

But after the setup and before the teardown there is often a smaller but far more important tangle of repetitive code that sets up the conditions for the assertion. Some people write their own custom helper functions to deal with this, but a Matcher is usually the correct solution to this problem.

Pick something to match

As with refactoring normal code, your goal is to end up with a set of short and sweet functions with descriptive names and straightforward internals.

Things to consider for a Matcher:

1. Do I have lots of scenarios that lead to the same outcome?

2. Do I use the same Object in many places to report an outcome?

3. Do I have lots of objects that behave similarly?

The last one requires some caution. 'Similar' code often indicates a missing level of refactoring is needed. Trying to create a matcher prior to doing this work may actually complicate the rework. It's a matter of when you need coverage on the code and which sources of pain you can avoid.

In an Angular app the conceptual space is pretty small, so this work can be pretty obvious in many situations. You have lots of code that deals with JSON responses, lots of code that works with DOM elements, and both can benefit from having Matchers that test attributes, presence of children, String comparisons (loose and strict), etc.

Reporting is Key

The big rule for any Matcher is that you have to clearly state what the problem is in a failure condition. Remember that we write tests largely to help keep people from accidentally breaking our code later in the project. During that delicate time where they're trying to write a new feature, a good error message can often tell the person what they broke without them having to context switch to look at your test.

And in the case of a bug, remember that the person fixing it may already be frustrated with the situation before work starts; don't pile onto that frustration with cryptic or subtly misleading test failure messages. Be kind. The sanity you save may be your own.

There is a short list of things I always put into my matcher messages:

1. I should know which matcher failed by the message. Make each one unique.

2. The actual and expected values must appear.

3. The actual should always appear before the expected. Common convention avoids confusion.

4. Using a dump() method to report the entire 'actual' object is wordy, but may save you from starting the debugger.

5. The values should be bracketed in some way so whitespace errors are obvious.

Bracketing turns this:

Expected  foo to be foo

into this:

Expected ' foo' to be 'foo'

You may notice the error in the first message immediately, or you might not. If you don't, you'll feel pretty stupid later on. But is that extra whitespace an error in the code, or did I get the string concatenation wrong in my Matcher and that extra whitespace is a red herring? The latter message makes it pretty dead obvious what happened, and takes only a couple seconds longer to write.
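In matcher code, bracketing is just a matter of baking the quote characters into the message you build, much like the toBeHidden example earlier:

message: "Expected '" + actual + "' to be '" + expected + "'."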

Always double-check your work

Rule #1 of Matchers: Any time you change a matcher, force some tests to fail to verify that the error makes sense.

It's easy to get the boolean logic wrong and have a set of tests that fail silently. It's easy to invert the meaning of the error message and not notice. It's also really easy to check.

Just try both of these negative tests:

// Check the error message
expect(answer).not.toEqual(answer); // forced failure; read the message it prints

// Check the equality test

expect(answer).toEqual('something else');

And maybe throw in a null check, and you've got a pretty good idea that your matcher won't fail on you later.

Good luck!       

Dev9 Coffee Talk: the Psychological Benefits of Continuous Delivery

Welcome back to Dev9 Coffee Talks! This interview was with Jason Marshall and Keith Bloomfield, both of whom are developers at Dev9. They will be sitting down with us a few times over the next few weeks to talk about a number of different topics.

I started off this time by asking what the deployment process is like in a traditional environment, and how the Continuous Delivery methodology is different. Jason jumped in right away. “Traditionally, developer teams write the code, QA checks it off, and Ops deploys it. It is a slow process, even when it is done well,” Jason said. “More often than not it’s painful too, particularly if you are dealing with preexisting customers and live deployments.”

“There is just so much that can go wrong with deployments,” he continued. “When you are changing the shape of the data, changing schemas, and trying to preserve all of the old data, there are a lot of points where a product can fail. That’s why deployment time is so stressful for developers. Deployment day means pagers, all-nighters, and scheduled downtime. That’s where the image of developers hunching over glowing monitors in the dark with soda and cold pizza came from.”

Keith was the first one to stop laughing. “When large deployments are done all at once, it is extremely taxing,” he said. “The late nights mean you and your team get sleep deprived, and when you are sleep deprived you make mistakes. Your attention to detail gets worse, and you want to get home so badly you just focus on the obvious problems. Then all the non-obvious problems slip through the cracks and crop up later.”

“Once all the red lights go off, victory is declared,” Jason chimed in. “Then, just as you pull into your driveway at home, the pager goes off and someone tells you everything is broken.”

“Continuous Delivery is about getting rid of all of that pain. Really, you could say that the goal is to make the deployment as boring as possible,” Jason went on. “If you are fresh when you hit the deploy button, you can figure out any problem. The best part is that if you have set things up so that you are fresh when you hit that button, so many other things need to have been done right that you probably don’t even need to be fresh.”

Stay tuned, we'll have more from Jason and Keith soon.

Intro to Go for Java Developers

Unless you've been living under a rock, or deep in crunch mode for several years, you've likely heard of Go (AKA golang), Google's new-ish language. It was designed as an alternative to the growing complexity of C++, especially around concurrency. It's also attracting droves of Python developers, as it offers dramatically better performance, all the fun of type safety, and a syntax that's more comfortable than Java or C#.

But I like Java just fine

However, we Java (and C#) developers are told every new language is the one that will save us from ourselves. Let's take a quick tour of Go and see what it offers.

To this end, I won't bore you with the basics of programming. I will show you the key differences from Java, and why you might consider Go for your next project.


For all of the examples listed in this article, you'll see a link next to 'Play this' -- this refers to the Golang Playground. This is a quick and easy way to test out the language without installing anything.

Hello World

Of course, before we get started, here is the canonical 'Hello World' for Go:

package main

import "fmt"

func main() {
    fmt.Println("Hello, world!")
}

Play this

This syntax is familiar to most developers in C-style languages.

Is it Object-Oriented? Functional? Procedural?

Go has constructs from all of these schools of thought, but with some modern best practices built in. For example, we've all heard mantras like "favor composition over inheritance" before.

For this reason, Go has made some interesting choices. First off, it has no concept of "Objects" -- a single abstraction that represents both state and behavior. It just has the idea of Types -- as C-like structs:

type Address struct {
    Number string
    Street string
    City   string
    State  string
    Zip    string
}

Notice also that the types follow the field names, and that identifiers starting with an upper-case letter are exported.

So, this would almost seem like a purely procedural language. If you've used Scala or C#, however, you're probably familiar with the idea of Extension Methods. This is also possible in JavaScript (by modifying the object prototype), Groovy (by manipulating the metaclass), and Ruby (monkey-patching). Instead of having those as a separate concept, Go makes those the only way to define behavior for a type:

package main

import "fmt"

type Address struct {
    Number string
    Street string
    City   string
    State  string
    Zip    string
}

func (a Address) Location() {
    fmt.Println("I’m at", a.Number, a.Street, a.City, a.State, a.Zip)
}

func main() {
    address := Address{Number: "137", Street: "Park Lane", City: "Kirkland", State: "WA", Zip: "98033"}
    address.Location()
}

Play this

Notice some more neat things here. We have named constructor parameters. We did not provide a type for the variable 'address'; the := operator tells the Go compiler to infer it. And the Location() function was automatically bound as a method on the Address type.

So, what would inheritance look like in this world? Let's create a MultiFamilyAddress:

type MultiFamilyAddress struct {
    Address Address
    Unit    string
}

This is a perfect example of composition over inheritance in Go. Now if we want to call the Location method, we have to do it like so:

func main() {
    address := Address{Number: "137", Street: "Park Lane", City: "Kirkland", State: "WA", Zip: "98033"}
    multi := MultiFamilyAddress{Address: address, Unit: "200"}
    multi.Address.Location()
}

Play this

Of course, we can always define a method with the signature func (m MultiFamilyAddress) Location() if we wanted to avoid this indirection. This isn't really inheritance the way we think of it. To do field-based inheritance, we use a construct Go calls anonymous fields:

type MultiFamilyAddress struct {
    Address
    Unit string
}

Not much different, right? This is Go's way of including all the fields of Address as though they were local fields on MultiFamilyAddress. This means the instantiation of MultiFamilyAddress will now look like this:

multi := MultiFamilyAddress{Address{Number: "137", Street: "Park Lane", City: "Kirkland", State: "WA", Zip: "98033"}, "200"}

Play this

Go also offers interfaces, but they are a bit different than your normal OO interfaces. We'll cover those in another article.

So we've seen the procedural and object-oriented methodologies, but what about functional? A key component of functional programming is higher-order functions. In Java, as of version 8, we can do something like this:

List<String> strings = Arrays.asList("Hello", "World");
strings.forEach(n -> System.out.println(n));

Of course, in Java 7 or before, it would be more like this:

List<String> strings = Arrays.asList("Hello", "World");
for (String str : strings) {
    System.out.println(str);
}

In Go, it would look something like this:

func main() {
    strings := [...]string{"Hello", "World"}
    for _, item := range strings {
        fmt.Println(item)
    }
}

Play this

Some interesting things here. First, to declare an array, we put the brackets at the beginning of the type. We used [...] to indicate the compiler should figure out the actual size. We could just as easily have made it [2]string{"Hello", "World"}.

The for loop is where it gets interesting. First, you see we are taking two values back, one indicated with an _ character. This is a convention in Go (and some other languages) for a value we don't care about. In this case, it's the index position of the element. The range clause takes an array or slice (a []T type) and executes the code inside the curly braces on each item.

Of course, this wasn't clearly a higher-order function, nor did it involve closures. Let's take a look at a simple example that does this:

func main() {
    x := 5
    fn := func() {
        fmt.Println("x is", x)
    }
    fn()
    x++
    fn()
}

Play this

This prints, as you might expect:

x is 5
x is 6

So we have functions as data types. This lets us do some interesting things:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

type calcOp func(int, int) int

func main() {
    // You seed your RNGs, right?
    rand.Seed(time.Now().UnixNano())

    fns := []calcOp{
        func(x, y int) int { return x + y },
        func(x, y int) int { return x - y },
        func(x, y int) int { return x * y },
        func(x, y int) int { return x / y },
        func(x, y int) int { return x % y },
    }

    fn := fns[rand.Intn(len(fns))]

    x, y := 171, 35
    fmt.Println(fn(x, y))
}

Play this

So what's going on here? First, we've defined a type called calcOp -- a calculator operation. It is a function that takes 2 integers, and returns an integer. This is now a defined type we can use in argument lists and objects.

In the main method, we create a collection of these functions. However, since we have omitted a size, it's not an array. In Go parlance, this is called a Slice.

We instantiate this collection of calcOp functions. We pick one at random. We initialize x and y with 171 and 35 respectively (that multi-assign syntax is also a feature of Go), then execute the function with those values. Neat!

Concurrency Constructs

So now we've seen that Go encapsulates many existing programming schools, but if you're a fan of one of those in particular, there is almost certainly a better language for it: Haskell, OCaml, and Clojure for functional, Ruby for OO, and C and Rust for procedural. One of the key selling points, and I cringe while typing this out, is that Go is meant for the cloud. Not only do we parallelize and distribute our applications, we need to parallelize our code as well. This has been a major source of both performance issues and correctness issues.

To that end, Go has two constructs that are going to help us: goroutines and channels. Goroutines are a lot like actors (in the Akka Actor sense) -- basically multiple threads without necessarily having a 1-to-1 correlation to system threads. When one blocks, another takes over. Channels are a way to separate computation and provide a clean interface to talk between them. Let's take a look at what they do:

package main

import (
    "fmt"
    "math/rand"
    "strconv"
    "time"
)

func Announce(message string, delay time.Duration) {
    go func() {
        time.Sleep(delay)
        fmt.Println(message)
    }()
}

func main() {
    for i := 0; i < 20; i++ {
        dur := time.Duration(rand.Int31n(10)) * time.Millisecond
        Announce("Item "+strconv.Itoa(i), dur)
    }
    fmt.Println("Done!")
}


Play this

The main method is just a bunch of setup -- defining dur to be a small duration of time (up to 10 milliseconds), and printing a value to the console 20 times. If you ran this program as-is, what would you expect to see? A bunch of random-ordered "Item X" messages, followed by a 'Done!' message? Here's what you actually get:

Done!

Program exited.

Wait, what? Let's look at that Announce function again. It is called with go func() -- this is how you invoke a goroutine. I am oversimplifying, but think of goroutines as backgrounded processes on the shell. Or, if you really know your threading model in Java, they are daemon threads. That is, they do not hold up program execution. When the main thread dies, they die as well. In Go, a goroutine only executes while the program is still running. We didn't get any 'Item' lines on the console because the program didn't run long enough. Let's add this line right before the 'Done!' line in the main function:

time.Sleep(time.Duration(5 * time.Second))

Play this

This tells our main thread to pause for 5 seconds, then we can continue and finish. With this model, we get our expected output:

Item 18
Item 15
Item 9
Item 5
Item 6
Item 17

So, that's goroutines. They're like background processes. The obvious question here is -- how do I make sure they execute? That is, you want to (potentially) offload the work to another thread or process, but it's important that it finishes. This is where Channels come in.

In Go, Channels -- this description blatantly taken from Go by Example -- are "the pipes that connect concurrent goroutines. You can send values into channels from one goroutine and receive those values into another goroutine."

Call this IPC or eventing or what have you. It is the basic construct for communicating between goroutines. So, what does a channel look like? To make a channel, we use the Go builtin make, which allocates and initializes the value for you:

mychan := make(chan string)

chan is the identifier for a channel. The string identifier says it's a channel of strings. That is, it takes and emits strings. The simplest way to emit and receive messages is this:

go func() { mychan <- "ping" }()
msg := <-mychan

Play this

We are using a goroutine lambda to emit a message to the channel mychan, and then receiving it into msg.

So, how would we apply this to the example above? We know we can send a message to a channel, and we know we can receive messages. Additionally, receiving a message is a blocking operation -- the execution stops until a message is available. We could go really naive with it:

func Announce(message string, delay time.Duration) {
    mychan := make(chan bool)
    go func() {
        time.Sleep(delay)
        fmt.Println(message)
        mychan <- true
    }()
    <-mychan
}


Play this

In this example, we receive from mychan after the execution of func finishes. This has one rather predictable side effect: all lines are printed in order. Because receiving a message is a blocking operation, we don't return control to the for loop until we have received a message. Now, what if we want to keep the parallelism? Here's how I solved this one:

package main

import (
    "fmt"
    "math/rand"
    "strconv"
    "time"
)

func Announce(message string, delay time.Duration, done chan bool) {
    go func() {
        time.Sleep(delay)
        fmt.Println(message)
        done <- true
    }()
}

func main() {
    numMessages := 20

    channels := make([]chan bool, numMessages)

    for i := 0; i < numMessages; i++ {
        channels[i] = make(chan bool)
        dur := time.Duration(rand.Int31n(10)) * time.Millisecond
        Announce("Item "+strconv.Itoa(i), dur, channels[i])
    }

    for i := 0; i < numMessages; i++ {
        <-channels[i]
    }

    fmt.Println("Done!")
}


Play this

Here, we use the make function again, this time to create a slice of channels, one for each message. Then, inside the loop, we create a channel and stick it in the slice. We then pass that channel to the Announce function. The goroutine inside that function signals the channel when it has executed. Because we don't read from the channels until afterwards, this allows the random-order execution we're looking for. To finish up, we drain the slice of channels.

There are other problems with this solution -- what if we don't know the number of channels we want, or what if the number is too large to reasonably store in memory? These will be left as an exercise for the reader.

Last Little Bits

So we've seen some neat concurrency concepts, as well as how to structure types and methods.

First, if you don't want to use the := syntax, you can declare a variable with a type:

var myint int = 5

This is not too useful for our examples. You can also declare constants:

const foo = "This is a constant"

We saw above that you can return multiple values from a function. You can do that yourself:

func multireturn() (int, string) {
    return 42, "foo"
}

var x, str = multireturn()

We didn't show a pure example of higher-order functions in the functional section, so here are two of those:

func adder() func(int) int {
    sum := 0
    return func(x int) int {
        sum += x // sum is declared outside, but still visible
        return sum
    }
}

func sum(i int) func(int) int {
    sum := i
    return func(x int) int {
        sum += x
        return sum
    }
}

func main() {
    add := adder()
    fmt.Println(add(1)) // the calls here are illustrative; any sequence works
    fmt.Println(add(2))

    add2 := sum(2)
    fmt.Println(add2(3))
}

Play this

This gives us the output (for the calls shown above):

1
3
5

And one last bit. Go has a defined structure to the code. There is only one correct way to format your Go programs. It's so important that there is a gofmt command (also runnable as go fmt) to put your code in the correct style, and it's not configurable. Holy wars have been started over the correct way to align braces, spaces, and brackets in C-style languages. Go picked one and built it in. When you have one less thing to worry about, you can focus on more important concerns.

Final Thoughts

Go is quite a fun language to work with. It has a lot of the power of C/C++ (including pointers), but cuts out a lot of cruft. It can be run either as a pre-compiled unit, or you can run a single file from the command line with go run myprogram.go. This gives it the dual purpose of compiled and interpreted software, making it just as appropriate for high-performance, long-running software as it is for advanced shell scripting. Happy programming!

Continuous Delivery Tool Recommendation for the Java Stack

There are eight essential components of a Continuous Delivery setup.

1.    Source Control
2.    Build Tool
3.    Automated Tests
4.    Continuous Integration (CI) Server
5.    Binary Repository
6.    Configuration Management
7.    Automated Deployment
8.    Monitoring and Analytics

An issue management system could also be argued for, but it is more of a project management concern.

Source Control

For this, I recommend Git unequivocally. Stash, which is like GitHub but behind a firewall, is also an effective tool. These tools allow for pull-request-based workflows that enforce code reviews and knowledge sharing. Git is a fantastic tool.

Build Tool

I have to recommend Maven for this. While some may object to its verbose XML syntax, it's very well supported by all the major Java IDEs. In addition, nearly every CI tool offers native Maven support. Maven also deals with dependency management, which could be its own category if the two were not bundled so neatly together. Gradle is another great alternative, but the ability to put code into your build scripts is a bit scary. It can be great if you have a disciplined team, but it could lead to non-repeatable builds. Additionally, the heavier the customization you put in, the less your tooling chain can help you.

Automated Tests

For commit tests, there are really two good choices: JUnit and TestNG. Either of them works. Nearly every Java developer should be familiar with JUnit. TestNG offers some more advanced tooling and arguably better runtime behavior. Nobody will get fired for using JUnit, but TestNG is a bit better if you are starting a greenfield project.
For mocking/stubbing, I like to use Mockito. It is pretty unrivaled in ease of use.

For fluent assertions, I like AssertJ. It supersedes Hamcrest and FEST.

Acceptance testing can often be done with JUnit and TestNG as well. I like to use the RestAssured framework for testing REST endpoints. I also do a bit of Selenium and other browser-based testing; PhantomJS is a great tool for a first pass. I like doing acceptance testing in a framework called Cucumber, because the test specifications follow an almost English-language structure.

For performance testing, I like Gatling locally and Neustar for cloud-based testing.

CI Server

The industry standard here is Jenkins, and it works fine. It has great community support and all that comes with it. However, I prefer TeamCity. It offers a lot of powerful features, like extracting templates from a build, easy automatic job creation for new branches, and many more. I also like the way it manages VCS roots a lot better. It is a commercial product past a certain size, but I think it is worth it. To get the same features out of Jenkins, you must do a bunch of configuration on a bunch of plugins from many different sources.

Binary Repository

There are only two reasonable choices here: Nexus or Artifactory. People can get into religious wars over these, but I prefer Artifactory. It can act as an NPM repository and an RPM repository. However, there is a more contentious issue. Artifactory will rewrite POM files to remove <repository> information so that you don’t leak requests. Nexus does not. That means that if somebody specifies a custom repository in a POM file, you will end up searching that one as well.

Configuration Management

There is no single tool here that stands out. I like using Typesafe Config for configuration. You still need a way to deploy it, though that is more a component of automated deployment. There is a lot of talk about distributed configuration management and configuration discovery. For that, etcd is the popular choice.

Automated Deployment

This can be a contentious issue, and I don’t have a solid opinion on it. The two primary packages are Chef and Puppet. I think either is a reasonable choice. They both work to automatically bring a system to a known state, but they take different tacks. Puppet is more declarative, and Chef is more scripted. I have worked more with Puppet, so I am more comfortable with it.

Monitoring and Analytics

For analytics, it is still hard to beat Dropwizard Metrics. A few annotations and you are on your way.

For monitoring, Zabbix seems to be a rather common tool – one that everyone has some problem with. ZenOSS is nice, but it is usually used in very large organizations and therefore tends to be cumbersome; it is only really appropriate if you are managing 100 or more servers. Nagios is pretty popular, but it seems to have stagnated in terms of advancements. I remember it being purely plugin-driven as well, meaning you need to know the ecosystem just to get it running.

Altogether, I still have to recommend Zabbix for most circumstances.


October Retrospective

We’re reminiscing over what a great October it has been for us. We started the month with a group of developers moving in-house to work on an exciting project for a client.

A bunch of us got together on a Saturday for a great cause. We were recognized as one of the fastest-growing private companies in the state. We had great times over food & drinks with our entire team at our quarterly All Hands... and finally, we hosted two great seminars.

Justin Graham gave his first seminar, on Developing a Test Strategy, earlier in October, and our CTO Will Iverson led the discussion on Managing an Agile Portfolio last week.

As we look forward to November, we’d like to share the next two seminars we've lined up.

Our first seminar for November will feature Faith Cooley, who will present Organizational Design for Effective Software Development on November 6th.

According to Faith, “It is often relatively easy to solve technical problems, [but] it is harder to solve organizational problems.”

Scenarios could include teams that are functioning in a less than optimal manner – in turn, this consumes budgets, impacts a lead or manager’s ability to deliver, and leaves everyone exhausted.

She will share easily executable ideas on how to improve cross training on teams, how leads can create well-rounded actionable reviews for their employees, and will give tips on how to have corrective conversations with team members.

Finally, later in the month, Gabe Hicks will cover Continuous Delivery Maturity. Be on the lookout for more info as we get closer to that event.

We can’t wait to see how November will shape up for us. More importantly we’re hoping you will take part in sharing some of those moments with us!

Dev9 Solutions Architect Coffee Talk: on Continuous Delivery & Automation

Once in a while, we like to sit down with our SAs and pick their brains about the development space in which they operate. We decided that those conversations are much more effective with the addition of coffee, so grab a cup and enjoy this entry in our series of "Dev9 SA Coffee Talks".

The Solutions Architect chosen for this first series of coffee interviews is Gabe Hicks, a solutions architect at Dev9. Gabe has been with Dev9 since the company’s inception. Currently he is working on a project at our corporate office.

We sat down at Starbucks. Gabe ordered a cappuccino and I went with an Americano. We opened our conversation by simply discussing, rather broadly, how companies benefit from continuous delivery. He paused thoughtfully for a few moments, and took a sip of his coffee. “Continuous delivery reduces the number of obstacles that surface during the development process,” Gabe said. “It’s about automation and breaking down traditional barriers. It’s about making deployment the most important piece of the development lifecycle.”

"Removing obstacles is really the core concept behind continuous delivery," He continued. "During development every obstacle must be dealt with or circumvented. The longer this process takes, the more expensive and frustrating a project becomes." Continuous Delivery establishes processes and practices that help to prevent some problems from occurring at all, and allows for quick identification and resolution of those that do occur.

As you would expect, our discussion inevitably shifted to the topic of automation.

“Developers love automation,” Gabe said matter-of-factly. “It removes their fear of deployments. They know their code has been tested, and that if something isn’t right, they have the ability to react and redeploy quickly. Products don’t fail in the eleventh hour, and you get to produce good work in a manner that lets you go home and not be a ball of unhappiness.” We laughed.

Automated testing allows for developers to produce more code, with better testing coverage, than manual testing could ever allow for. This means that there are fewer bugs that make it into the build, and everyone loves that. Coupling automated deployment processes with automated testing allows for rapid development and deployment while minimizing downtime.

“Automation has not always been encouraged,” said Gabe as we headed out of 'our' Starbucks. “When I first started (developing), no one asked you to do any automation. Continuous delivery says to automate at every level, all the way through. It produces much higher quality (code).”

Keep an eye out here on our blog for further CD Interview transcripts!