Dev9 Partners With Broadleaf to Develop Custom E-Commerce Solutions

KIRKLAND, WA--(Marketwired - September 15, 2016) - Dev9, a Kirkland, Washington-based custom software development company, is pleased to announce a technology partnership with Broadleaf Commerce, a provider of B2B and B2C e-commerce platform solutions for complex, multi-channel commerce and digital experience management.

"We've worked with many different organizations at Dev9 and have seen a common problem: businesses are often forced to tailor their e-commerce processes around their software. It shouldn't have to be that way," Mike Ensor, Practice Director of Digital Transformation Services at Dev9 said. "By leveraging Broadleaf's platform, we're able to work with our clients to create an e-commerce solution that's just right for their business -- and one that will scale easily as they continue to evolve."

After an extensive analysis of e-commerce platform options, Dev9 has selected Broadleaf as a technology partner. Dev9 found that Broadleaf's feature-rich platform best fits the company's core principles of predictive, transparent and lean development, enabling flexibility and scalability. Broadleaf's solutions allow Dev9 engineers to maintain Continuous Delivery best practices while designing an end-to-end, custom e-commerce system that exactly fits the client's needs.

"Broadleaf's core philosophies are closely aligned with Dev9's. We're committed to providing tailored, lightweight solutions designed for continuous innovation," stated Brad Buhl, COO at Broadleaf Commerce. "For complex enterprise commerce systems, Dev9's focus on Continuous Delivery is a perfect fit. Iterative implementations, automated testing, continuous integration, and automated deployments provide businesses with platform stability, while lowering the cost and risk associated with monolithic projects."

Dev9 has deep experience architecting and developing e-commerce systems for enterprises. This partnership with Broadleaf will expedite development and support Dev9 in its promise to deliver superior custom software solutions for clients.

About Dev9

Dev9 is a custom software development firm focused on Java and JavaScript technologies. We build software solutions based on Continuous Delivery -- a set of processes and tools that leverages a combination of transparent, predictable and lean principles deployed with a heavy emphasis on automation. Typical projects are web services at scale (e.g. Spring Boot), including integration with SQL, NoSQL and other enterprise systems. We also build client-side applications, integrating platforms such as AngularJS, Android and iOS. Developer-to-Operations and DevOps implementations often feature container strategy development (e.g. Docker).

About Broadleaf

Broadleaf Commerce provides B2B and B2C e-commerce platform solutions to simplify the complexities of multi-channel commerce and digital experience management. As the market-leading choice for enterprise organizations requiring tailored, highly scalable commerce systems, Broadleaf is fully customizable and extensible. Trusted by Fortune 500 corporations, yet priced for the mid-market, Broadleaf provides the framework for leading brands, including Google, The Container Store, O'Reilly Auto Parts, and Vology.

For more information, visit

Dev9 Named to Inc. Magazine's 2016 "Inc. 5000" List of Fastest-Growing Private Companies

Dev9 Boasts Three-Year Growth of 184% Earning Inclusion in Inc.'s 35th Annual Exclusive List of America's Fastest-Growing Private Companies

KIRKLAND, WA--(Marketwired - August 18, 2016) - Dev9, a Kirkland, Washington-based custom software development firm, is pleased to announce that Inc. Magazine ranked the company number 2024 in its 35th annual Inc. 5000 list, the exclusive ranking of the nation's fastest-growing private companies. The list represents a unique look at the most successful companies within the American economy's most dynamic segment -- its independent small businesses. Companies such as Microsoft, Dell, Domino's Pizza, Pandora, Timberland, LinkedIn, Yelp, Zillow and many other well-known names gained their first national exposure as honorees of the Inc. 5000.

"Dev9 is proud to be included in this year's Inc. 5000 list," Matt Munson, Dev9 COO and co-founder, said. "Since our founding in 2010, we have seen steady growth thanks to our amazing team of talented software engineers and their commitment to teamwork and excellence. We also owe thanks to our clients who have put their faith in Dev9 to implement the latest in software technology, and enabled us to deliver the great results that have made our success possible."

The 2016 Inc. 5000 is the most competitive crop in the list's history. The average company on the list achieved a mind-boggling three-year growth of 433%. The Inc. 5000's aggregate revenue is $200 billion, and the companies on the list collectively generated 640,000 jobs over the past three years, or about 8% of all jobs created in the entire economy during that period.

"The Inc. 5000 list stands out where it really counts," says Inc. President and Editor-In-Chief Eric Schurenberg. "It honors real achievement by a founder or a team of them. No one makes the Inc. 5000 without building something great -- usually from scratch. That's one of the hardest things to do in business, as every company founder knows. But without it, free enterprise fails."

Complete results of the Inc. 5000, including company profiles and an interactive database that can be sorted by industry, region and other criteria, can be found

About Dev9

Dev9 is a custom software development firm focused on Java and JavaScript technologies. We build custom software solutions based on Continuous Delivery -- a set of processes and tools that leverages a combination of Lean principles and the heavy use of automation. Typical projects are web services at scale (e.g. Spring Boot), including integration with SQL, NoSQL and other enterprise systems. We also build client-side applications, integrating platforms such as AngularJS, Android and iOS. Developer-to-Operations and DevOps implementations often feature container strategy development (e.g. Docker).

Introduction to Kong API Gateway


Kong is an open source API Gateway that sits in front of your RESTful API. You can extend Kong using plugins. Out of the box, you manage and configure Kong using a RESTful API. This can be a bit cumbersome, but there are third-party frontends that can help make managing Kong a little easier.

In this introduction to Kong we will explain what an API Gateway is, download and set up a Kong Docker container, and install the Key Authentication plugin to secure the API for our consumers.

We will be using the Open Weather Map free tier API for this demonstration.

What is an API Gateway?

Put simply, an API gateway is a filter that sits in front of your RESTful API. This gateway can be hosted by you or a third party. Typically, the gateway will provide one or more of the following:

  • Access control – only allow authenticated and authorized traffic
  • Rate limiting – restrict how much traffic is sent to your API
  • Analytics, metrics and logging – track how your API is used
  • Security filtering – make sure the incoming traffic is not an attack
  • Redirection – send traffic to a different endpoint

Here is a simple diagram showing a typical workflow using Kong:

Clients make requests to your API by going through Kong first. Kong will proxy the requests to the final API and will execute all plugins that you have set up. For example, if you have the rate-limiting plugin installed, Kong will check that the request doesn't exceed the specified limits before calling your API.
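
To give a flavor of how plugins are configured (we will do this for real with the key-auth plugin later), enabling rate limiting is a single call to the admin API. This is a sketch only; it assumes an API named "weather" already exists, and the limit value is illustrative:

$ curl -X POST http://localhost:8001/apis/weather/plugins \
    --data "name=rate-limiting" \
    --data "config.minute=100"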


Prerequisites

  • Docker – make sure you have Docker installed and have some idea of how to use it.
  • Open Weather Map – we will be using the Open Weather Map free tier API for this demo. You will need to sign up for a free account in order to get the API key needed to call their services.

Nice to have

  • Kong Dashboard – a UI tool for managing your Kong Gateway. This demo will show you how to use the Kong API to create your API and consumers and to install plugins, but the UI does make managing Kong a bit easier.
  • Kitematic for Docker – makes it a bit easier to run Docker containers from a simple UI.


Kong has a nice write-up on how to use Kong in Docker; this is just a summary of what is on that page.


For this example, I set up the Cassandra container:

$ docker run -d --name kong-database \
    -p 9042:9042 \
    cassandra:2.2

Next start Kong:

$ docker run -d --name kong \
--link kong-database:kong-database \
-e "DATABASE=cassandra" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 7946:7946 \
-p 7946:7946/udp \
kong

A few notes about the ports:

  • 8000 – non-SSL proxy layer for API requests.
  • 8443 – SSL-enabled proxy layer for API requests.
  • 8001 – RESTful Admin API for configuration. You will use this port to administer your Kong installation.
  • 7946 – used for Kong clustering (TCP and UDP).

Finally, call the admin endpoint to verify Kong is running:

$ curl http://localhost:8001

Add Open Weather Map API

First we need to create an API object in Kong to describe the API that we are going to expose to consumers.

Let's go ahead and create the API using the administration API and port. The name, upstream URL and request host below are illustrative values pointing at the Open Weather Map service:


$ curl -i -X POST \
    --url http://localhost:8001/apis/ \
    --data 'name=weather' \
    --data 'upstream_url=http://api.openweathermap.org/data/2.5' \
    --data 'request_host=weather.example.com'


"created_at": 1463423154000,
"id": "d736abb8-3a8b-451f-84b0-e85f5e53e907",
"name": "",
"preserve_host": false,
"request_host": "",
"strip_request_path": false,
"upstream_url": ""

Now let’s use the administration API to confirm the Weather API was added successfully:


$ curl --url http://localhost:8001/apis


"data": [
"created_at": 1463423154000,
"id": " d736abb8-3a8b-451f-84b0-e85f5e53e907",
"name": "",    
"preserve_host": false,
"request_host": "",
"strip_request_path": false,
"upstream_url": ""
"total": 1

Make a note of the “id” for this API (d736abb8-3a8b-451f-84b0-e85f5e53e907 in this example). We will be using this later to add a plugin to it.


If you are running on a Mac, you can pipe the output through Python to pretty print the JSON output:

$ curl --url http://localhost:8001/apis | python -m json.tool


Let's go ahead and verify we can call the API we just registered. Be sure to replace <key> with the API key you created after registering with Open Weather Map.

Note that we are using the proxy port (8000) to make the call to the API. Kong will forward the request to the "upstream_url" we defined when creating the API.

The “Host” header is used by Kong to know which API to forward to.


$ curl -v 'http://localhost:8000/weather?q=London&APPID=<key>' \
    --header 'Host: weather.example.com'

Example response:

"base": "cmc stations",
"clouds": {
"all": 0
"cod": 200,
"coord": {
"lat": 51.51,
"lon": -0.13
"dt": 1463422447,
"id": 2643743,
"main": {
"grnd_level": 1022.25,
"humidity": 51,
"pressure": 1022.25,
"sea_level": 1032.14,
"temp": 290.005,
"temp_max": 290.005,
"temp_min": 290.005
"name": "London",
"sys": {
"country": "GB",
"message": 0.0153,
"sunrise": 1463371538,
"sunset": 1463428149
"weather": [
"description": "clear sky",
"icon": "01d",
"id": 800,
"main": "Clear"
"wind": {
"deg": 316.003,
"speed": 4.58    

Install Key Authentication Plugin

Now let's start locking down the API. We don't want unauthenticated clients accessing it! Kong's key-auth plugin makes key authentication easy to add.

To install the plugin, we use the following request, which attaches it to the API. Notice that we are using the "id" of the API we created above:

$ curl -X POST http://localhost:8001/apis/d736abb8-3a8b-451f-84b0-e85f5e53e907/plugins \
    --data "name=key-auth"


"api_id": "d736abb8-3a8b-451f-84b0-e85f5e53e907",
"config": {
"hide_credentials": false,
"key_names": [
"created_at": 1463497899000,
"enabled": true,
"id": "6ab3cbf1-8e43-4ce3-938f-b137415eee9b",
"name": "key-auth"

Test That Authentication Is Now Required

Now let's make sure our API is secure. Make the following request again, being sure to replace "<key>" with your key:


$ curl -v 'http://localhost:8000/weather?q=London&APPID=<key>' \
    --header 'Host: weather.example.com'

Notice we now get a 401 response:

< HTTP/1.1 401 Unauthorized

Add a Consumer

In order to use the API now, we will need to create a consumer and add a key.


$ curl -X POST http://localhost:8001/consumers --data "username=jack" --data "custom_id=1234"



Now let’s add a key to this new consumer. Note that I had to add an empty body to the POST otherwise Kong would return a 415 Unsupported Media Type:

$ curl -X POST http://localhost:8001/consumers/jack/key-auth --data ""


"consumer_id": "4ab3062d-b72b-40db-b6b9-e7c9f7dabee7",
"created_at": 1463498984000,
"id": "b92d911c-5604-4c37-8130-48a159004755",
"key": "a025d31f90eb48d6a4eb88cd22df6f98"

Note that the plugin auto-generated a key for us. Make note of the "key", as we will need it in order to call the API. We could also have passed in a key of our own in the body of the request, as shown below.
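
For example, a sketch of registering a key of our own choosing (the key value here is illustrative):

$ curl -X POST http://localhost:8001/consumers/jack/key-auth \
    --data "key=my-chosen-key"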


Now let’s make sure this consumer can access the API using their API key generated from above. We need to pass in a new “apikey” header with the key.

$ curl -v 'http://localhost:8000/weather?q=London&APPID=<key>' \
    --header 'Host: weather.example.com' \
    --header 'apikey: a025d31f90eb48d6a4eb88cd22df6f98'



I hope this simple tutorial has shown how easy Kong is to run (thanks, Docker!) and to manage using the Kong management API. Kong plugins offer a lot of flexibility and customization for your APIs, and the optional Kong Dashboard makes managing Kong a bit easier than having to use curl. Not too bad for an open source project!






Using Event Storming to break up Monolithic Architectures

Our Story

Over the past few years, Dev9 has emerged as a leader in the Pacific Northwest region. Many of our clients have engaged with us to help make the transition from large monolithic applications to a more manageable distributed suite of smaller applications. Before Dev9 is able to start developing these microservices, we first need to learn and understand the business and the associated domains. Dev9 has used various methods to identify fissures between domains within a monolithic application, yet one method has stood out as the most complete process that achieves the majority of objectives for a Discovery engagement.

How Dev9 Uses Event Storming

Dev9 has been utilizing Event Storming during our Discovery engagements so we can build comprehensive business flows in hours instead of weeks. The models discovered during our sessions are valuable to both the business and the development teams. After attending a session, our clients very commonly come away having learned more about their own business processes, established business-goal alignment, built a mutual understanding of the business domain, and teased out business models that allow development to start with a domain-driven design approach.

What is Event Storming?

In the early 2010s, Alberto Brandolini developed a business modeling process built on the principles of Game Storming. He called this new process "Event Storming". The core concepts of Event Storming were conceived within the Domain-Driven Design (DDD) community, and the process combines the facilitated-session aspects of Game Storming with the objective of identifying DDD principles. The primary goals of Event Storming are to promote transparency in the business flows, develop a model based on domain events, understand the system by asking the right questions, establish mutual understanding of the business flows, and have fun through a truly collaborative process.

Who is Involved?

While Event Storming is often used by technical teams, Event Storming is NOT a technology-oriented activity. Business "Domain Experts" (product owners, stakeholders, marketing, sales and QA members) are typically the best source of domain events. The primary goal is to model the business domain, not to confuse the domain with the technology used to implement the business requirements.

Our Process

Dev9 has found that a simple adoption of the Event Storming process refined by Alberto and the DDD community has proven the most beneficial. When planning and setting up an Event Storming session, we make sure that a good cross-section of the business is invited, that domain experts come prepared to engage in conversation, and that we hack the work environment.

Hack the Environment

The session starts by hacking the room in which the session will take place. This means moving chairs and tables away from the wall where the action is to happen. Next, masking tape is used to put up a long sheet of paper, which will be the workspace for the team to add their work and draw context bounds. It is important to have more room than you think will be needed, to accommodate domain events growing in all directions and possible re-starts should a model not be expressed as cleanly as the team would like. The following checklist is what we bring to every Event Storming session:

* A variety of sticky notes including orange, yellow, blue, green, purple and large yellow sizes
* Roll of butcher paper
* Box of Sharpie or felt-tipped markers
* Masking tape
* Easel (Optional) and Poster-board with quick reference instructions
* Star stickers or small bookmark sticky tabs

Set Very Basic Rules

Once the room has been set up properly, the team will start filtering in. Event Storming is unique in that each session is different from the last, depending on the team and company. In order to get the most out of our sessions, we have discovered that a very minimal set of rules and guidance allows for the flexibility teams need to model their business flows. We like to start by explaining that the session will be open and collaborative, with no sitting or standing off to the side, and that we are interested in modeling the business flows using the most basic business elements, called "domain events". We cover a few terms for clarity and alignment purposes.



Domain

A specified sphere of activity or knowledge (Wikipedia). Examples: Recipes, Billing, Customers, etc.

Domain Event

Something meaningful that happens in the domain: "added ingredient", "sent invoice", "updated email".

Lastly, we explain that Event Storming consists of teasing out a domain; modeling and visualizing user interactions, domain events, actions, aggregates and conditionals, and bounded contexts; and describing read models. In a subsequent post, we will walk through a fictitious Event Storming session and describe an instance of how Dev9 conducts one.


In this post we have described one of the processes Dev9 uses to decompose monolithic applications into their business domain components. We believe that investing in an organization's understanding of its domain will pay off many times over compared to approaching the domain from a technology or database level. Remember, "Software development is a learning process; working code is a side effect." – Alberto Brandolini


Dev9 Announces Partnership With Hippo CMS for Web Content Management

Software Development Firm Selects Java-based, Open Source Software Provider to Deliver the Perfect Customer Journey to Clients

KIRKLAND, WA -- (Marketwired - April 27, 2016) - Dev9, a Kirkland, Washington-based custom software development company, is pleased to announce an implementation partnership with Hippo, a provider of Java-based, open source web content management software. After an exhaustive analysis of over 200 content management systems, Dev9 has identified Hippo as a modern, highly scalable, enterprise-grade content management system (CMS).

"Content management is fundamental to the operation of almost every business," Will Iverson, Dev9 Co-founder and Chief Technology Officer, said. "In working with enterprise clients in recent years, we found that traditional, large, expensive CMS platforms just weren't keeping up with modern content needs. We needed a go-to platform to recommend and deploy for our clients that would provide the flexibility and scalability necessary for modern content demands. Hippo CMS is that solution."

In searching for the perfect CMS platform, Dev9 identified five factors critical for success: support for continuous delivery best practices, a content-as-a-service development model, an enjoyable development experience, agility, and Java-based security and scalability. Hippo CMS delivers on all of these critical needs.

Arjé Cahn, CTO and co-founder of Hippo: "We love how Dev9 understands the challenges of the modern enterprise in their digital transformation. It closely matches our product vision, where we focus on flexibility and agility, combined with our well-known best-of-breed architecture, seamlessly integrating into any enterprise digital customer experience model. Dev9's focus on Continuous Delivery is a perfect fit with the Hippo product and it will greatly help their customers deliver on the digital transformation challenges they're facing."

Dev9 boasts extensive experience migrating and modernizing CMS packages for Fortune 100 companies. This includes system planning and installation, application cutover, SEO maintenance, content migration and analytics. Dev9 has expertise migrating systems that require integration with internal and/or external systems, as well as frequent deployments.

About Dev9
Dev9 is a custom software development firm focused on Java and JavaScript technologies. We build custom software solutions based on Continuous Delivery -- a set of processes and tools that leverages a combination of Lean principles and the heavy use of automation. Typical projects are web services at scale (e.g. Spring Boot), including integration with SQL, NoSQL and other enterprise systems. We also build client-side applications, integrating platforms such as AngularJS, Android and iOS. Developer-to-Operations and DevOps implementations often feature container strategy development (e.g. Docker).

Contact Dev9 to streamline your IT investment: (425) 296-2800.

About Hippo
Hippo is on a mission to make the digital experience more personable for every visitor. We're redefining the CMS space by engineering the world's most advanced content performance platform, designed to help businesses understand their visitors -- whether they are known or anonymous -- and deliver the content they value in any context and on any device. Together with its global network of Certified Partners, Hippo serves a rapidly growing number of enterprise clients around the world including Bell Aliant, Autodesk, Couchbase, the University of Maryland, the Dutch Foreign Office, Randstad, Veikkaus, NHS, 1&1 Internet, Bugaboo and Weleda.

For more information visit:
Follow us on Twitter: @OneHippo

5 Things We've Learned from being a Disruptive Tech Company

Dev9 is a custom software consulting company. We use Continuous Delivery - a system of automation and pipeline development - to deliver high-quality software quickly and frequently. Here are five lessons learned by our founders, Will Iverson and Matt Munson, in working with our clients over the last six years:

  1. The Robots Are Coming: We help automate the production of software. Smart developers embrace the automation and love it. We are doing everything we can to liberate our engineers and clients from drudgery - we are trying to find the future, not fight it.
  2. Your Mission Drives Company Culture: It’s really hard to retrofit a culture to adopt a new way of doing business. Done poorly, it can kill a company. A lot of our bigger clients are using us specifically to relearn how to build software with automation. It’s like fixing a passenger jet while it’s flying - but you have to invest to stay ahead.
  3. People Matter More When You Automate: It turns out that if you have a lot of manual processes, you start to treat the staff as robots! If you use robots for the boring manual stuff, you wind up talking to your coworkers more. The thoughts, opinions, and creative sides of your team have a lot more impact.
  4. Some People Are Empire Builders: Some people approach their career growth as a simple matter of having a lot of people reporting to them. Those people would rather have a hundred manual testers reporting to them than a team of twenty software engineers with a focus on testing. Someone who believes that will never be a good client, no matter what happens - they would rather fire all the engineers and replace them with a huge manual test organization. These people are usually managed not to results, but to organization size.
  5. Automation is a Force Multiplier: Automation massively drives up productivity. It’s an order of magnitude difference in output. Once it becomes the new normal, nobody wants to go back.

How Does Automation in Continuous Delivery Affect Outsourced Software Development?

How We Got Here

Over the last few decades, there have been tremendous business incentives to move jobs to lower cost regions. This has been particularly true for software engineering - why not pay a lot less on an hourly rate for a developer in another country? Testing is a prime example - if you need to click through a website to make sure it all works on every operating system and every browser, cheap manual labor starts to look pretty compelling.

Over the last decade, many of the challenges of offshore development have become more commonly known. Hidden costs of offshore development include communication delays, cultural challenges, timezone problems, and uncertainty over the relationship between hours and value. Perhaps most significantly, it forces business leaders to ask questions they are ill-equipped to answer, such as “what is the true capability of a software engineering team?” or “how do I truly evaluate if a team that costs twice as much on an hourly basis is actually twice as productive?”

In manufacturing, the answer to this problem is turning out to be a combination of technological innovations and new processes - in particular, the use of automation. A manual job that was outsourced to another country is brought back to the US as an engineer running an automation system. What makes automation so attractive is not a simple matter of hourly cost savings - it’s a complete shift in quality and output when robotics are brought to bear. You simply can’t buy a handmade smart phone or laptop - human hands are insufficiently accurate.

From Manufacturing To Software

In the world of custom software development, Dev9 has combined industry-leading tools and processes to create a unique software development solution based on principles of automation. This allows Dev9 to provide custom software solutions at a fraction of the overall long term cost of a manual solution, while simultaneously delivering at the scale needed for customers.

Consider a website that needs to support millions of users accessing the system simultaneously. Perhaps it’s an entertainment site, or a travel booking system. The business wants to be able to test new features all the time - perhaps by exposing a small percentage of users to a new feature for a day to see if it impacts sales.

The most basic metric for measuring the quality of a Continuous Delivery-oriented team is the time it takes to do a deployment, including a complete test pass. A good Continuous Delivery organization will measure this in minutes. Traditional manual/waterfall organizations will measure this in weeks or months.

In a traditional engineering organization, the business would work with a development team and then that team would hand off the work to be tested by a large manual team. The process of adding even a simple change can take weeks or months. With Dev9, a small, highly proficient engineering team builds both the software and the test automation framework. This allows for rapid deployment with software robots performing tests at scale.

Even smaller projects benefit from automation. Consider a simple application, expected to be deployed to Mac OS, Windows, iOS and the countless Android devices currently on the market. A solo developer can benefit from a fleet of software robots, helping build, test, and distribute that application.

To be crystal clear, there is still a role for manual testing, but that manual testing is for higher value questions, like "is this feature intuitive" or "does this look good," not "does this workflow work on eight different browsers" or "will this software work if we have ten thousand users?"

Customer Demand For Precision

When customers engage with Dev9, a project starts by laying out a pipeline for software development, testing, and deployment. This pipeline is based on a combination of best practices, target platforms, scale, and a thorough analysis of any existing systems.

Common drivers for new projects include a need to scale, a desire to move to a cloud-based platform such as Amazon Web Services, a need to adopt new user client technologies such as mobile or Internet-of-Things devices, or just simply a need to move off an archaic system.

Whatever the client initiative, a common driver for seeking out Dev9 is a need for a high quality solution with a desire for a very collaborative, goal oriented team.

Ironically, once the automation pipeline is in place, it’s the ongoing relationship and collaboration that drives longer term engagements. Once a client gets used to working with a high performance, automation-focused team, it’s very common for clients to extend that working relationship to other projects and opportunities.

This pipeline and the associated processes are often described in the industry as Continuous Delivery. It’s not that the software is deployed multiple times a day, but that it is always ready for deployment.

Smaller Collaborative Teams

This is probably the most important aspect of Dev9’s model. By using smaller teams and leveraging automation, the real conversation turns back to solving business problems. If a standard team shifts from a remote team with 5 developers and 10 manual QA to a single integrated team of 7, that’s a huge optimization of management, people, and effort. It’s a lot easier to get the small team aligned, focused and delivering.

As noted above, a good Continuous Delivery organization measures a full deployment, including a complete test pass, in minutes rather than weeks or months. Imagine the opportunity costs alone in waiting months to roll out new features.

Dev9 Named Consulting Partner in the Amazon Web Services Partner Network

Software Development Firm Expands Offerings to Help Organizations Take Advantage of Amazon's Cloud-Based Services

KIRKLAND, WA --(Marketwired - March 15, 2016) - Dev9, a Kirkland, Washington-based custom software development company, is pleased to announce that Amazon Web Services (AWS) has named the company a Consulting Partner in the AWS Partner Network (APN). APN Consulting Partners are professional service firms that help organizations design, architect, build, migrate, and manage their workloads and applications for AWS.

"Dev9 helps organizations build technology systems to streamline IT efforts and investments, and gives them the ability to scale and grow," Will Iverson, Dev9 Co-founder and Chief Technology Officer, said. "We are proud to have helped numerous organizations build software systems that make a difference to business success. AWS is an essential platform for enterprises who want to affordably, reliably, and easily scale their organization in the cloud."

The company's certified AWS developers are recognized as IT professionals who possess the skills and technical knowledge necessary for designing, deploying and managing applications on the AWS platform.

As a part of the APN, Dev9 has access to AWS resources and training to support customers as they deploy, run and manage applications on the AWS Cloud. With Dev9's help, organizations can move to a reduced-cost model, making possible what was once cost-prohibitive and disruptive to the business. With extensive experience re-writing applications and platforms, Dev9 helps businesses with expensive, cumbersome and dated technology make the migration to AWS with little to no downtime.

Learn more about Dev9's AWS service offerings

About Dev9

Dev9 is a custom software development firm focused on Java and JavaScript technologies. We build custom software solutions based on Continuous Delivery -- a set of processes and tools that leverages a combination of Lean principles and the heavy use of automation. Typical projects are web services at scale (e.g. Spring Boot), including integration with SQL, NoSQL and other enterprise systems. We also build client-side applications, integrating platforms such as AngularJS, Android and iOS. Developer-to-Operations and DevOps implementations often feature container strategy development (e.g. Docker). Contact us to streamline your IT investment: (425) 296-2800.



Shell scripting with AWS Lambda: The Function

In the previous article, I detailed how AWS Lambda can be used as a scripting control tool for other AWS services. The fact that it is focused on running individual functions, contains the AWS SDK by default, and only accrues costs when running creates a perfect situation for administrative scripting. In this article, I detail the use of Lambda functions to perform the cleaning itself.

Lambda functions are single JavaScript Node.js functions that are called by the Lambda engine. They take two parameters that provide information about the event that triggered the function call and the context the function is running under. It is important that these functions run as stateless services that do not depend on the underlying compute infrastructure. In addition, it is helpful to keep the function free of excessive setup code and dependencies to minimize the overhead of running functions.
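
As a minimal sketch of that shape (the handler name and body are illustrative, using the Node.js runtime of the time):

exports.handler = function (event, context) {
  // event: data about what triggered this invocation
  // context: runtime information and completion callbacks
  console.log('Received event:', JSON.stringify(event));
  context.succeed('done');
};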

The Code


var aws = require('aws-sdk');
var async = require('async');
var moment = require('moment');
var ec2 = new aws.EC2({apiVersion: '2014-10-01'});

The AWS SDK is available to all Lambda functions, and we import and configure it for use with EC2 in this example. You can also include any JavaScript library that you would use with Node. I have included both the Async module and the Moment.js library for working with time.

Core Logic

var defaultTimeToLive = moment.duration(4, 'hours');

function shouldStop(instance) {
    var timeToLive = moment.duration(defaultTimeToLive.asMilliseconds());
    var permanent = false;

    // Collect TTL overrides from the instance tags. Note that returning
    // from inside a forEach callback does not exit the outer function,
    // so we record the 'permanent' flag and check it afterwards.
    instance.Tags.forEach(function (tag) {
      if (tag.Key === 'permanent') {
        permanent = true;
      } else if (tag.Key === 'ttl-hours') {
        timeToLive = moment.duration(Number(tag.Value), 'hours');
      }
    });

    if (permanent) {
      return false;
    }

    var upTime = new Date().getTime() - instance.LaunchTime.getTime();

    if (upTime < timeToLive.asMilliseconds()) {
      console.log("Instance (" + instance.InstanceId + ") has " +
        moment.duration(timeToLive.asMilliseconds() - upTime).humanize() + " remaining.");
      return false;
    }
    return true;
}

I use the AWS tagging mechanism to drive the decision about whether an EC2 instance should be stopped. If the instance is tagged as 'permanent' or with a specific 'ttl-hours' tag, the function knows whether it should be kept alive and for how long. If no tag was added, we want to stop those instances after a default time period. It might be helpful to externalize this to an AWS configuration store such as SimpleDB, but I leave that as an exercise for the reader. Finally, it is helpful to log the amount of time the instances have left on their TTL.
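
Tagging the instances happens outside the function; for example, with the AWS CLI (the instance id below is illustrative):

$ aws ec2 create-tags --resources i-0123456789abcdef0 \
    --tags Key=ttl-hours,Value=8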

Searching the instances

async.waterfall([
    function fetchEC2Instances(next) {
      var ec2Params = {
        Filters: [
          {Name: 'instance-state-name', Values: ['running']}
        ]
      };
      ec2.describeInstances(ec2Params, function (err, data) {
        next(err, data);
      });
    },
    function filterInstances(data, next) {
      var stopList = [];

      data.Reservations.forEach(function (res) {
        res.Instances.forEach(function (instance) {
          if (shouldStop(instance)) {
            stopList.push(instance.InstanceId);
          }
        });
      });
      next(null, stopList);
    },
    function stopInstances(stopList, next) {
      if (stopList.length > 0) {
        ec2.stopInstances({InstanceIds: stopList}, function (err, data) {
          next(err);
        });
      }
      else {
        console.log("No instances need to be stopped");
        next(null);
      }
    }
  ],
  function (err) {
    if (err) {
      console.error('Failed to clean EC2 instances: ', err);
    } else {
      console.log('Successfully cleaned all unused EC2 instances.');
    }
  });
This should look familiar to everyone who has done JavaScript AWS SDK work. We use the Async library to query for running instances. We then run the returned instance data through our helper method as a filter. Finally, we take all of the identified instances and stop them.

This code works well for a moderate number of running instances.  If you need to handle thousands of instances in your organization, you will need to adjust the fetch and stop processes to handle AWS SDK paging.  
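
A sketch of what a paged fetch might look like (the recursive structure is illustrative; NextToken is the SDK's paging mechanism for describeInstances):

function fetchAllInstances(params, instances, done) {
  ec2.describeInstances(params, function (err, data) {
    if (err) { return done(err); }
    data.Reservations.forEach(function (res) {
      res.Instances.forEach(function (instance) {
        instances.push(instance);
      });
    });
    if (data.NextToken) {
      // More pages remain; fetch the next one.
      params.NextToken = data.NextToken;
      return fetchAllInstances(params, instances, done);
    }
    done(null, instances);
  });
}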

You can find this code in our GitHub repository here:

Next Steps

The final piece of the puzzle for our Lambda scripting is deployment and scheduling. In my final article on this, I will cover both how to deploy a Lambda function and the current, kludgy method for job scheduling using EC2 autoscaling.

Tiered Testing of Microservices

There is a false challenge in testing a microservice. The application does not exist in isolation. It collaborates with other services in an interdependent web. How can one test a single strand of a web?

But test dependency management is not a new challenge. Using a microservice architecture increases the scale of the problem, and this forces a development team to address integration explicitly and strategically.

Common Terminology

Before discussing a testing strategy for microservices, we need a simple model with explicitly defined layers. Examples are given for RESTful implementations, but this model could be adapted for any transport format.

Figure 1: microservice structure

Resources handle incoming requests. They validate request format, delegate to services, and then package responses. All handling of the transport format for incoming requests is managed in resources. For a RESTful service, this would include deserialization of requests, authentication, serialization of responses, and mapping exceptions to http status codes.

Services handle business logic for the application. They may collaborate with other services, adapters, or repositories to retrieve needed data to fulfill a request or to execute commands. Services only consume and produce domain objects. They do not interact with DTOs from the persistence layer or transport layer objects – requests and responses in a RESTful service, for example.

Adapters handle outgoing requests to external services. They marshal requests, unmarshal responses, and map them to domain objects that can be used by services. They are usually only called by services. All handling of the transport format for outgoing requests is managed in adapters.

Repositories handle transactions with the persistence layer (generally databases) in much the same way that adapters handle interactions with external services. All handling of persistent dependencies is managed in this layer.

A lightweight microservice might combine one or more of the above layers in a single component, but separation of concerns will make unit testing much simpler.
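
As a rough sketch of this separation in Node.js (all names are illustrative, and the repository and adapter are stubbed in-memory rather than backed by a real database or external service):

// Repository: owns persistence access (stubbed here).
function OrderRepository() {
  var orders = {'42': {id: '42', sku: 'widget'}};
  this.findById = function (id, cb) { cb(null, orders[id]); };
}

// Adapter: owns outgoing calls to an external pricing service (stubbed here).
function PricingAdapter() {
  this.currentPrice = function (sku, cb) { cb(null, 9.99); };
}

// Service: business logic against domain objects only.
function OrderService(repository, pricingAdapter) {
  this.getOrder = function (id, cb) {
    repository.findById(id, function (err, order) {
      if (err || !order) { return cb(err || new Error('not found')); }
      pricingAdapter.currentPrice(order.sku, function (err, price) {
        if (err) { return cb(err); }
        order.price = price;
        cb(null, order);
      });
    });
  };
}

// Resource: transport concerns only; simulated here with a plain call.
var service = new OrderService(new OrderRepository(), new PricingAdapter());
service.getOrder('42', function (err, order) {
  console.log(JSON.stringify(order)); // {"id":"42","sku":"widget","price":9.99}
});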

Planning for Speed and Endurance

A test strategy in general should prevent unwelcome surprises in production. We want to get as much valuable quality-related information as we can (coverage), in realistic conditions (verisimilitude), as fast as we can (speed), and with as little bother as possible (simplicity).

Every test method has trade-offs. Unit tests provide fast results for many scenarios and are usually built into the build process – they have good coverage, speed, and simplicity, but they aren’t very realistic. Manual user testing has the most verisimilitude and can be very simple to execute, but has very poor speed and coverage.

Tiered Testing Strategy

To balance these trade-offs, we use a tiered testing strategy. Tests at the bottom of the pyramid are generally fast, numerous, and executed frequently, while tests at the top are generally slow, few in number, and executed less frequently. This article focuses on how these tiers are applied for microservices.

Unit Testing

Unit tests cover individual components. In a microservice, unit tests are most useful in the service layer, where they can verify business logic under controlled circumstances against conditions provided by mock collaborators. They are also useful in resources, repositories, and adapters for testing exceptional conditions – service failures, marshaling errors, etc.

Figure 2: Unit Testing Coverage
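
A minimal sketch of such a unit test in Node.js, using a mock adapter to simulate a collaborator failure (all names are illustrative; a real project would typically use a test framework):

var assert = require('assert');

// Service under test: wraps an adapter and maps failures to a domain error.
function WeatherService(adapter) {
  this.forecast = function (city, cb) {
    adapter.fetchForecast(city, function (err, data) {
      if (err) { return cb(new Error('forecast unavailable')); }
      cb(null, {city: city, summary: data.summary});
    });
  };
}

// Mock adapter standing in for the external weather service.
var failingAdapter = {
  fetchForecast: function (city, cb) { cb(new Error('503 from upstream')); }
};

new WeatherService(failingAdapter).forecast('London', function (err, result) {
  assert(err, 'expected a domain error');
  assert.strictEqual(err.message, 'forecast unavailable');
  assert.strictEqual(result, undefined);
  console.log('unit test passed: adapter failure mapped to a domain error');
});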

To get the most value from unit tests, they need to be executed frequently – every build should run the tests, and a failed test should fail the build. This is configured on a continuous integration server (e.g., Jenkins, TeamCity, Bamboo) constantly monitoring for changes in the code.

Service Testing

Service testing encompasses all tests of the microservice as a whole, in isolation. Service testing is also often called “functional testing”, but this can be confusing since most tiers described here are technically functional. The purpose of service tests is to verify that all components that do not require external dependencies integrate correctly. To enable testing in isolation, we typically use mock components in place of the adapters and in-memory data sources for the repositories, configured under a separate profile. Tests are executed using the same technology that incoming requests would use (http for a RESTful microservice, for example).

Figure 3: Service Testing Coverage

A team could avoid using mock implementations of adapters at this tier by testing against mock external services with recorded responses. This is more realistic, but in practice it adds a great deal of complexity – recorded responses must be maintained for each service and updated for all collaborators whenever a service changes. It also requires deploying these mock collaborators alongside the system under test during automated service testing, which adds complexity to the build process. It’s easier to rely on a quick, robust system integration testing process with automated deployments to reduce the lag between these two tiers.

Service tests can also be run as part of the build process using most build tools, ensuring that the application not only compiles but can also be deployed in an in-memory container without issue.

System Integration Testing

System integration tests verify how the microservice behaves in a functionally realistic environment – real databases, collaborators, load-balancers, etc. For the sake of simplicity, these are often also end-to-end tests – rather than writing a suite of system integration tests for each microservice, we develop a suite for the entire ecosystem. In this tier, we are focused on testing configuration and integration using “normal” user flows.

Figure 4: System Integration Testing Coverage

This test suite is also functionally critical because it is the first realistic test of the adapter/repository layer, since we rely on mocks or embedded databases in the lower layers. Because integration with other microservices is so critical, it’s important that this testing process be streamlined as much as possible. This is where an automated release, deployment, and testing process provides tremendous advantages.

User Acceptance Testing

System integration tests verify that the entire web of microservices behaves correctly when used in the fashion the development team assumes it will be used (against explicit requirements). User acceptance testing replaces assumptions with actual user behavior. Ideally, users are given a set of goals to accomplish and a few scenarios to test, rather than explicit scripts.

Because user acceptance tests are often manual, this process is generally not automated (though it is possible, with crowd-sourcing). As a result, it can happen informally as part of sprint demos, formally only for major releases, or through live A/B testing with actual users.

Non-functional Testing

Non-functional testing is a catchall term for tests that verify non-functional quality aspects: security, stability, and performance. While comprehensive runs of these tests generally happen less frequently, a sound goal is to infect the lower tiers with these aspects as well. For example, security can be tested functionally (logging in with an invalid password, for example), but at some point it also needs to be tested as an end in itself (through security audits, penetration testing, port scanning, etc.). As another example, performance testing can provide valuable information even during automated functional tests, by setting thresholds for how long individual method calls may take, or during user acceptance testing, by soliciting feedback on how the system responds to requests; but it also needs to be tested more rigorously against the system as a whole under realistic production load.

Ideally, these tests would be scheduled to run automatically following successful system integration testing, but this can be challenging if production-like environments are not always available or third-party dependencies are shared.

Summation

The goal of the testing strategy, remember, is to be as fast, complete, realistic, and simple as possible. Each tier of testing adds complexity to the development process. Complexity is a hidden cost that must be justified, and not just to project stakeholders – your future self will need to maintain these tests indefinitely.

This strategy can serve as a model for organizing your own tiered strategy for testing, modified as necessary for your context. If you’ve found new and interesting solutions to the problems discussed in this article, let me know at