Using Event Storming to break up Monolithic Architectures

Our Story

Over the past few years, Dev9 has emerged as a leader in custom software development in the Pacific Northwest. Many of our clients have engaged us to help make the transition from large monolithic applications to a more manageable distributed suite of smaller applications. Before Dev9 can start developing these microservices, we first need to learn and understand the business and its associated domains. Dev9 has used various methods to identify fissures between domains within a monolithic application, yet one method has stood out as the most complete process for achieving the majority of the objectives of a Discovery engagement.

How Dev9 Uses Event Storming

Dev9 has been using Event Storming during our Discovery engagements so we can build comprehensive business flows in hours instead of weeks. The models discovered during our sessions are valuable to both the business and the development teams. After attending a session, it is very common for our clients to come away having learned more about their own business processes, established business-goal alignment, built a mutual understanding of the business domain, and teased out business models that allow development to start with a domain-driven design approach.

What is Event Storming?

In the early 2010s, Alberto Brandolini developed a business modeling process built on the principles of Game Storming. He called this new process “Event Storming”. The core concepts of Event Storming were conceived within the Domain-Driven Design (DDD) community, and the process combines the facilitated-session aspects of Game Storming with the objective of identifying DDD principles. The primary goals of Event Storming are to promote transparency in the business flows, develop a model based on domain events, understand the system by asking the right questions, establish mutual understanding of the business flows, and have fun through a truly collaborative process.

Who is Involved?

While Event Storming is often used by technical teams, it is NOT a technology-oriented activity. Business “domain experts” (product owners, stakeholders, marketing, sales and QA members) are typically the best source of domain events. The primary goal is to model the business domain, not to confuse the domain with the technology used to implement the business requirements.


Our Process

Dev9 has found that a straightforward adoption of the Event Storming process refined by Brandolini and the DDD community has proven the most beneficial. When planning and setting up an Event Storming session, we make sure that a good cross-section of the business is invited, that domain experts come prepared to engage in conversation, and that we hack the work environment.

Hack the Environment

The session starts by hacking the room in which the session will take place. This means moving chairs and tables away from the wall where the action is to happen. Next, masking tape is used to put up a long sheet of paper which will be the workspace for the team to add their work and draw context boundaries. It is important to have more room than you think will be needed, to accommodate domain events growing in all directions and possible re-starts should a model not be expressed as cleanly as the team would like. The following checklist covers what we bring to every Event Storming session:

* A variety of sticky notes: orange, yellow, blue, green, and purple, plus large yellow ones
* Roll of butcher paper
* Box of sharpie or felt-tipped markers
* Masking tape
* Easel (Optional) and Poster-board with quick reference instructions
* Star stickers or small bookmark sticky tabs


Set Very Basic Rules

Once the room has been set up properly, the team will start filtering in. Event Storming is unique in that each session is different from the last, depending on the team and company. In order to get the most out of our sessions, we have discovered that using a very minimal set of rules and guidance allows for the flexibility teams need to model their business flows. We like to start by explaining that the session will be open and collaborative (no sitting or standing off to the side) and that we are interested in modeling the business flows using the most basic business elements, called “domain events”. We cover a few terms for clarity and alignment purposes.

Definitions

Domain

A specified sphere of activity or knowledge (Wikipedia). Examples: Recipes, Billing, Customers, etc.

Domain Event

Something meaningful that happens in the domain: “added ingredient”, “sent invoice”, “updated email”.

Lastly, we explain that Event Storming consists of teasing out a domain; modeling and visualizing user interactions, domain events, actions, aggregates and conditionals, and bounded contexts; and describing read models. In a subsequent post, we will walk through a fictitious Event Storming session and describe an instance of how Dev9 conducts one.

Conclusion

In this post we have described one of the processes Dev9 uses to decompose monolithic applications into their business domain components. We believe that a solid effort toward understanding each organization's domain will pay off many times over compared to approaching a domain from a technology or database level. Remember, “Software development is a learning process; working code is a side effect.” – Alberto Brandolini

 

Dev9 Announces Partnership With Hippo CMS for Web Content Management

Software Development Firm Selects Java-based, Open Source Software Provider to Deliver the Perfect Customer Journey to Clients


KIRKLAND, WA -- (Marketwired - April 27, 2016) - Dev9, a Kirkland, Washington-based custom software development company, is pleased to announce an implementation partnership with Hippo, a provider of Java-based, open source web content management software. After an exhaustive analysis of over 200 content management systems, Dev9 has identified Hippo as a modern, highly scalable, enterprise-grade content management system (CMS).

"Content management is fundamental to the operation of almost every business," Will Iverson, Dev9 Co-founder and Chief Technology Officer, said. "In working with enterprise clients in recent years, we found that traditional, large, expensive CMS platforms just weren't keeping up with modern content needs. We needed a go-to platform to recommend and deploy for our clients that would provide the flexibility and scalability necessary for modern content demands. Hippo CMS is that solution."

In searching for the perfect CMS platform, Dev9 identified five factors that were critical for success: support for continuous delivery best practices, a content-as-a-service model for development, an enjoyable development experience, agility, and Java-based security and scalability. Hippo CMS delivers on all of these critical needs.

Arjé Cahn, CTO and co-founder of Hippo: "We love how Dev9 understands the challenges of the modern enterprise in their digital transformation. It closely matches our product vision, where we focus on flexibility and agility, combined with our well known best-of-breed architecture, seamlessly integrating in any enterprise digital customer experience model. Dev9's focus on Continuous Delivery is a perfect fit with the Hippo product and it will greatly help their customers deliver on the digital transformation challenges they're facing."

Dev9 boasts extensive experience migrating and modernizing CMS packages for Fortune 100 companies. This includes system planning and installation, application cutover, SEO maintenance, content migration and analytics. Dev9 has expertise migrating systems that require integration with internal and/or external systems, as well as frequent deployments.

About Dev9
Dev9 is a custom software development firm focused on Java and JavaScript technologies. We build custom software solutions based on Continuous Delivery -- a set of processes and tools that leverages a combination of Lean principles and the heavy use of automation. Typical projects are web services at scale (e.g. Spring Boot), including integration with SQL, NoSQL and other enterprise systems. We also build client-side applications, integrating platforms such as AngularJS, Android and iOS. Developer-to-Operations and DevOps implementations often feature container strategy development (e.g. Docker).

Contact Dev9 to streamline your IT investment.
info@dev9.com, (425) 296-2800.

About Hippo
Hippo is on a mission to make the digital experience more personable for every visitor. We're redefining the CMS space by engineering the world's most advanced content performance platform, designed to help businesses understand their visitors -- whether they are known or anonymous -- and deliver the content they value in any context and on any device. Together with its global network of Certified Partners, Hippo serves a rapidly growing number of enterprise clients around the world including Bell Aliant, Autodesk, Couchbase, the University of Maryland, the Dutch Foreign Office, Randstad, Veikkaus, NHS, 1&1 Internet, Bugaboo and Weleda.

For more information visit: www.onehippo.com.
Follow us on Twitter: @OneHippo

5 Things We've Learned from being a Disruptive Tech Company

Dev9 is a custom software consulting company. We use Continuous Delivery - a system of automation and pipeline development - to deliver high-quality software quickly and frequently. Here are five lessons learned by our founders, Will Iverson and Matt Munson, in working with our clients over the last six years:

  1. The Robots Are Coming: We help automate the production of software. Smart developers embrace the automation and love it. We are doing everything we can to liberate our engineers and clients from drudgery - we are trying to find the future, not fight it.
  2. Your Mission Drives Company Culture: It’s really hard to retrofit a culture to adopt a new way of doing business. Done poorly, it can kill a company. A lot of our bigger clients are using us specifically to relearn how to build software with automation. It’s fixing the passenger jet while it’s flying - but you have to invest to stay ahead.
  3. People Matter More When You Automate: It turns out that if you have a lot of manual processes, you start to treat the staff as robots! If you use robots for the boring manual stuff, you wind up talking to your coworkers more. The thoughts, opinions, and creative sides of your team have a lot more impact.
  4. Some People Are Empire Builders: Some people approach their career growth as a simple matter of having a lot of people reporting to them. Those people would rather have a hundred manual testers reporting to them than a team of twenty actual software engineers with a focus on testing. Someone who believes that will never be a good client, no matter what happens - they would rather fire all the engineers and replace them with a huge manual test organization. These people are usually measured not on results, but on the size of their organization.
  5. Automation is a Force Multiplier: Automation massively drives up productivity. It’s an order of magnitude difference in output. Once it becomes the new normal, nobody wants to go back.

How Does Automation in Continuous Delivery Affect Outsourced Software Development?

How We Got Here

Over the last few decades, there have been tremendous business incentives to move jobs to lower cost regions. This has been particularly true for software engineering - why not pay a lot less on an hourly rate for a developer in another country? Testing is a prime example - if you need to click through a website to make sure it all works on every operating system and every browser, cheap manual labor starts to look pretty compelling.

Over the last decade, many of the challenges of offshore development have become more commonly known. Hidden costs of offshore development include communication delays, cultural challenges, timezone problems, and uncertainty over the relationship between hours and value. Perhaps most significantly, it forces business leaders to ask questions they are ill-equipped to answer, such as “what is the true capability of a software engineering team?” or “how do I truly evaluate if a team that costs twice as much on an hourly basis is actually twice as productive?”

In manufacturing, the answer to this problem is turning out to be a combination of technological innovations and new processes - in particular, the use of automation. A manual job that was outsourced to another country is brought back to the US as an engineer running an automation system. What makes automation so attractive is not a simple matter of hourly cost savings - it’s a complete shift in quality and output when robotics are brought to bear. You simply can’t buy a handmade smart phone or laptop - human hands are insufficiently accurate.

From Manufacturing To Software

In the world of custom software development, Dev9 has combined industry-leading tools and processes to create a unique software development solution based on principles of automation. This allows Dev9 to provide custom software solutions at a fraction of the overall long term cost of a manual solution, while simultaneously delivering at the scale needed for customers.

Consider a website that needs to support millions of users accessing the system simultaneously. Perhaps it’s an entertainment site, or a travel booking system. The business wants to be able to test new features all the time - perhaps by exposing a small percentage of users to a new feature for a day to see if it impacts sales.

In a traditional engineering organization, the business would work with a development team and then that team would hand off the work to be tested by a large manual team. The process of adding even a simple change can take weeks or months. With Dev9, a small, highly proficient engineering team builds both the software and the test automation framework. This allows for rapid deployment with software robots performing tests at scale.

Even smaller projects benefit from automation. Consider a simple application, expected to be deployed to Mac OS, Windows, iOS and the countless Android devices currently on the market. A solo developer can benefit from a fleet of software robots, helping build, test, and distribute that application.

To be crystal clear, there is still a role for manual testing, but that manual testing is for higher value questions, like "is this feature intuitive" or "does this look good," not "does this workflow work on eight different browsers" or "will this software work if we have ten thousand users?"

Customer Demand For Precision

When customers engage with Dev9, a project starts by laying out a pipeline for software development, testing, and deployment. This pipeline is based on a combination of best practices, target platforms, scale, and a thorough analysis of any existing systems.

Common drivers for new projects include a need to scale, a desire to move to a cloud-based platform such as Amazon Web Services, a need to adopt new user client technologies such as mobile or Internet-of-Things devices, or just simply a need to move off an archaic system.

Whatever the client initiative, a common driver for seeking out Dev9 is a need for a high quality solution with a desire for a very collaborative, goal oriented team.

Ironically, once the automation pipeline is in place, it’s the ongoing relationship and collaboration that drives longer term engagements. Once a client gets used to working with a high performance, automation-focused team, it’s very common for clients to extend that working relationship to other projects and opportunities.

This pipeline and the associated processes are often described in the industry as Continuous Delivery. It’s not that the software is deployed multiple times a day, but that it is always ready for deployment.

Smaller Collaborative Teams

This is probably the most important aspect of Dev9’s model. By using smaller teams and leveraging automation, the real conversation turns back to solving business problems. If a standard team shifts from a remote team with 5 developers and 10 manual QA to a single integrated team of 7, that’s a huge optimization of management, people, and effort. It’s a lot easier to get the small team aligned, focused and delivering.

Probably the most basic metric for measuring the quality of a Continuous Delivery-oriented team is the time it takes to do a deployment, including a complete test pass. A good Continuous Delivery organization will measure this in minutes. Traditional manual/waterfall organizations will measure this in weeks or months. Imagine the opportunity costs alone in waiting for months to roll out new features.

Dev9 Named Consulting Partner in the Amazon Web Services Partner Network

Software Development Firm Expands Offerings to Help Organizations Take Advantage of Amazon's Cloud-Based Services

KIRKLAND, WA --(Marketwired - March 15, 2016) - Dev9, a Kirkland, Washington-based custom software development company, is pleased to announce that Amazon Web Services (AWS) has named the company a Consulting Partner in the AWS Partner Network (APN). APN Consulting Partners are professional service firms that help organizations design, architect, build, migrate, and manage their workloads and applications for AWS.

"Dev9 helps organizations build technology systems to streamline IT efforts and investments, and gives them the ability to scale and grow," Will Iverson, Dev9 Co-founder and Chief Technology Officer, said. "We are proud to have helped numerous organizations build software systems that make a difference to business success. AWS is an essential platform for enterprises who want to affordably, reliably, and easily scale their organization in the cloud."

The company's certified AWS developers are recognized as IT professionals who possess the skills and technical knowledge necessary for designing, deploying and managing applications on the AWS platform.

As a part of the APN, Dev9 has access to AWS resources and training to support customers as they deploy, run and manage applications on the AWS Cloud. With Dev9's help, organizations can experience a reduced-cost model, making possible what was once cost-prohibitive and disruptive to business. With extensive experience re-writing applications and platforms, Dev9 helps businesses with expensive, cumbersome and dated technology make the migration to AWS with little to no downtime.

Learn more about Dev9's AWS service offerings

About Dev9

Dev9 is a custom software development firm focused on Java and JavaScript technologies. We build custom software solutions based on Continuous Delivery -- a set of processes and tools that leverages a combination of Lean principles and the heavy use of automation. Typical projects are web services at scale (e.g. Spring Boot), including integration with SQL, NoSQL and other enterprise systems. We also build client-side applications, integrating platforms such as AngularJS, Android and iOS. Developer-to-Operations and DevOps implementations often feature container strategy development (e.g. Docker). Contact us to streamline your IT investment. info@dev9.com, (425) 296-2800.

 

 

Shell scripting with AWS Lambda: The Function

In the previous article, I detailed how AWS Lambda can be used as a scripting control tool for other AWS services. The fact that it is focused on running individual functions, contains the AWS SDK by default, and only accrues costs when running creates a perfect situation for administrative scripting. In this article, I detail the use of a Lambda function to perform the cleanup itself.

Lambda functions are single JavaScript Node.js functions that are called by the Lambda engine. They take two parameters that provide information about the event that triggered the function call and the context the function is running under. It is important that these functions run as stateless services that do not depend on the underlying compute infrastructure. In addition, it is helpful to keep the function free of excessive setup code and dependencies to minimize the overhead of running functions.

The Code

Imports

var aws = require('aws-sdk');
var async = require('async');
var moment = require('moment');
 
var ec2 = new aws.EC2({apiVersion: '2014-10-01'});

The AWS SDK is available to all Lambda functions, and we import and configure it for use with EC2 in this example. You can also include any JavaScript library that you would use with Node. I have included both the Async module and the Moment.js library for time handling.

Core Logic

var defaultTimeToLive = moment.duration(4, 'hours');

function shouldStop(instance) {
    var timeToLive = moment.duration(defaultTimeToLive.asMilliseconds());
    var permanent = false;

    instance.Tags.forEach(function (tag) {
      if (tag.Key === 'permanent') {
        // Returning from the forEach callback would not exit shouldStop,
        // so record the permanent tag in a flag instead.
        permanent = true;
      } else if (tag.Key === 'ttl-hours') {
        // Tag values arrive as strings, so parse the hour count.
        timeToLive = moment.duration(parseInt(tag.Value, 10), 'hours');
      }
    });

    if (permanent) {
      return false;
    }

    var upTime = new Date().getTime() - instance.LaunchTime.getTime();

    if (upTime < timeToLive.asMilliseconds()) {
      timeToLive.subtract(upTime);
      console.log("Instance (" + instance.InstanceId + ") has " + timeToLive.humanize() + " remaining.");
      return false;
    }
    return true;
}

I use the AWS tagging mechanism to drive the decision about whether an EC2 instance should be stopped. If the instance is tagged as 'permanent' or with a specific 'ttl-hours' tag, the function knows that it should be kept alive and for how long. If no tag was added, we want to stop the instance after a default time period. It might be helpful to externalize this to an AWS configuration store such as SimpleDB, but I leave that as an exercise for the reader. Finally, it is helpful to log the amount of time the instances have left on their TTL.

Searching the instances

// This waterfall runs inside the exported handler function, so `context`
// below refers to the Lambda context object passed to that handler.
async.waterfall([
    function fetchEC2Instances(next) {
      var ec2Params = {
        Filters: [
          {Name: 'instance-state-name', Values: ['running']}
        ]
      };

      ec2.describeInstances(ec2Params, function (err, data) {
        next(err, data);
      });
    },
    function filterInstances(data, next) {
      var stopList = [];

      data.Reservations.forEach(function (res) {
        res.Instances.forEach(function (instance) {
          if (shouldStop(instance)) {
            stopList.push(instance.InstanceId);
          }
        });
      });
      next(null, stopList);
    },
    function stopInstances(stopList, next) {
      if (stopList.length > 0) {
        ec2.stopInstances({InstanceIds: stopList}, function (err, data) {
          if (err) {
            next(err);
          }
          else {
            console.log(data);
            next(null);
          }
        });
      }
      else {
        console.log("No instances need to be stopped");
        next(null);
      }
    }
  ],
  function (err) {
    if (err) {
      console.error('Failed to clean EC2 instances: ', err);
    } else {
      console.log('Successfully cleaned all unused EC2 instances.');
    }
    context.done(err);
  });

This should look familiar to everyone who has done JavaScript AWS SDK work. We use the Async library to query for running instances. We then run the returned instance data through our helper method as a filter. Finally, we take all of the identified instances and stop them.

This code works well for a moderate number of running instances.  If you need to handle thousands of instances in your organization, you will need to adjust the fetch and stop processes to handle AWS SDK paging.  

You can find this code in our Github repository here: https://github.com/dev9com/lambda-cleanup/blob/master/lambda-node/CostWatch.js.

Next Steps

The final piece of the puzzle for our Lambda scripting is deployment and scheduling. In my final article on this, I will cover both how to deploy a Lambda function and the current, kludgy method for job scheduling using EC2 autoscaling.

Tiered Testing of Microservices

There is a false challenge in testing a microservice. The application does not exist in isolation. It collaborates with other services in an interdependent web. How can one test a single strand of a web?

But test dependency management is not a new challenge. Using a microservice architecture increases the scale of the problem, and this forces a development team to address integration explicitly and strategically.

Common Terminology

Before discussing a testing strategy for microservices, we need a simple model with explicitly defined layers. Examples are given for RESTful implementations, but this model could be adapted for any transport format.

Figure 1: microservice structure

Resources handle incoming requests. They validate request format, delegate to services, and then package responses. All handling of the transport format for incoming requests is managed in resources. For a RESTful service, this would include deserialization of requests, authentication, serialization of responses, and mapping exceptions to http status codes.

Services handle business logic for the application. They may collaborate with other services, adapters, or repositories to retrieve needed data to fulfill a request or to execute commands. Services only consume and produce domain objects. They do not interact with DTOs from the persistence layer or transport layer objects – requests and responses in a RESTful service, for example.

Adapters handle outgoing requests to external services. They marshal requests, unmarshal responses, and map them to domain objects that can be used by services. They are usually only called by services. All handling of the transport format for outgoing requests is managed in adapters.

Repositories handle transactions with the persistence layer (generally databases) in much the same way that adapters handle interactions with external services. All handling of persistent dependencies is managed in this layer.

A lightweight microservice might combine one or more of the above layers in a single component, but separation of concerns will make unit testing much simpler.
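
To make the separation of concerns concrete, here is a minimal Java sketch of how these layers might collaborate in a RESTful service. The Recipe names, fields, and methods are purely illustrative assumptions, not taken from any particular framework or client project.

// Domain object produced and consumed by the service layer.
class Recipe {
    String id;
    String nutrition;
}

// Repository: wraps the persistence layer (a database in a real service).
interface RecipeRepository {
    Recipe findById(String id);
}

// Adapter: wraps outgoing requests to an external nutrition service.
interface NutritionAdapter {
    String nutritionFor(String recipeId);
}

// Service: business logic only; collaborates with the repository and adapter
// and never touches transport or persistence formats directly.
class RecipeService {
    private final RecipeRepository repository;
    private final NutritionAdapter nutritionAdapter;

    RecipeService(RecipeRepository repository, NutritionAdapter nutritionAdapter) {
        this.repository = repository;
        this.nutritionAdapter = nutritionAdapter;
    }

    Recipe findRecipe(String id) {
        Recipe recipe = repository.findById(id);
        recipe.nutrition = nutritionAdapter.nutritionFor(id);
        return recipe;
    }
}

// Resource: owns the transport concerns, here reduced to producing a response body.
class RecipeResource {
    private final RecipeService service;

    RecipeResource(RecipeService service) {
        this.service = service;
    }

    String getRecipe(String id) {
        Recipe recipe = service.findRecipe(id);
        return "{\"id\":\"" + recipe.id + "\",\"nutrition\":\"" + recipe.nutrition + "\"}";
    }
}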

Planning for Speed and Endurance

A test strategy in general should prevent unwelcome surprises in production. We want to get as much valuable quality-related information as we can (coverage), in realistic conditions (verisimilitude), as fast as we can (speed), and with as little bother as possible (simplicity).

Every test method has trade-offs. Unit tests provide fast results for many scenarios and are usually built into the build process – they have good coverage, speed, and simplicity, but they aren't very realistic. Manual user testing has the most verisimilitude and can be very simple to execute, but has very poor speed and coverage.

Tiered Testing Strategy

To balance these trade-offs, we use a tiered testing strategy. Tests at the bottom of the pyramid are generally fast, numerous, and executed frequently, while tests at the top of the pyramid are generally slow, few in number, and executed less frequently. This article focuses on how these tiers are applied for microservices.

Unit Testing

Unit tests cover individual components. In a microservice, unit tests are most useful in the service layer, where they can verify business logic under controlled circumstances against conditions provided by mock collaborators. They are also useful in resources, repositories, and adapters for testing exceptional conditions – service failures, marshaling errors, etc.
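
As an illustration, a unit test for the hypothetical RecipeService sketched earlier could verify the business logic against mocked collaborators. This is only a sketch, assuming JUnit 4 and Mockito:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class RecipeServiceTest {

    @Test
    public void findRecipeCombinesRepositoryAndAdapterData() {
        // Mock collaborators provide controlled conditions for the service under test.
        RecipeRepository repository = mock(RecipeRepository.class);
        NutritionAdapter adapter = mock(NutritionAdapter.class);

        Recipe stored = new Recipe();
        stored.id = "42";
        when(repository.findById("42")).thenReturn(stored);
        when(adapter.nutritionFor("42")).thenReturn("350 kcal");

        RecipeService service = new RecipeService(repository, adapter);
        Recipe result = service.findRecipe("42");

        assertEquals("350 kcal", result.nutrition);
    }
}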

Figure 2: Unit Testing Coverage

To get the most value from unit tests, they need to be executed frequently – every build should run the tests, and a failed test should fail the build. This is configured on a continuous integration server (e.g., Jenkins, TeamCity, Bamboo) that constantly monitors for changes in the code.

Service Testing

Service testing encompasses all tests of the microservice as a whole, in isolation. Service testing is also often called “functional testing”, but this can be confusing since most tiers described here are technically functional. The purpose of service tests is to verify that the integration of all components that do not require external dependencies is functionally correct. To enable testing in isolation, we typically use mock components in place of the adapters and in-memory data sources for the repositories, configured under a separate profile. Tests are executed using the same technology that incoming requests would use (http for a RESTful microservice, for example).
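
As a sketch of what this might look like if the microservice were a Spring Boot application: a "test" profile swaps in the mock adapters and an in-memory database, and the test exercises the running application over HTTP. The endpoint path and profile name here are assumptions for illustration only.

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test") // test profile wires in mock adapters and in-memory repositories
public class RecipeServiceLevelTest {

    @Autowired
    private TestRestTemplate rest;

    @Test
    public void getRecipeRespondsOverHttp() {
        // Exercise the service the same way a real client would: over HTTP.
        ResponseEntity<String> response = rest.getForEntity("/recipes/42", String.class);

        assertEquals(200, response.getStatusCodeValue());
    }
}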

Figure 3: Service Testing Coverage

A team could avoid using mock implementations of adapters at this tier by testing against mock external services with recorded responses. This is more realistic, but in practice it adds a great deal of complexity – recorded responses must be maintained for each service and updated for all collaborators whenever a service changes. It also requires deploying these mock collaborators alongside the system under test during automated service testing, which adds complexity to the build process. It’s easier to rely on a quick, robust system integration testing process with automated deployments to reduce the lag between these two tiers.

Service tests can also be run as part of the build process using most build tools, ensuring that the application not only compiles but can also be deployed in an in-memory container without issue.

System Integration Testing

System integration tests verify how the microservice behaves in a functionally realistic environment – real databases, collaborators, load-balancers, etc. For the sake of simplicity, these are often also end-to-end tests – rather than writing a suite of system integration tests for each microservice, we develop a suite for the entire ecosystem. In this tier, we are focused on testing configuration and integration using “normal” user flows.

Figure 4: System Integration Testing Coverage

This test suite is also functionally critical because it is the first realistic test of the adapter/repository layer, since we rely on mocks or embedded databases in the lower layers. Because integration with other microservices is so critical, it’s important that this testing process be streamlined as much as possible. This is where an automated release, deployment, and testing process provides tremendous advantages.

User Acceptance Testing

System integration tests verify that the entire web of microservices behaves correctly when used in the fashion the development team assumes it will be used (against explicit requirements). User acceptance testing replaces those assumptions with actual user behavior. Ideally, users are given a set of goals to accomplish and a few scenarios to test, rather than explicit scripts.

Because user acceptance tests are often manual, this process is generally not automated (though it is possible, with crowd-sourcing). As a result, this can happen informally as part of sprint demos, formally only for major releases, or through live A/B testing with actual users.

Non-functional Testing

Non-functional testing is a catchall term for tests that verify non-functional quality aspects: security, stability, and performance. While these tests are generally executed less frequently in a comprehensive manner, a sound goal is to try to infect the lower tiers with these aspects as well. For example, security can also be tested functionally (logging in with an invalid password, for example), but at some point it also needs to be tested as an end in itself (through security audits, penetration testing, port scanning, etc). As another example, performance testing can provide valuable information even during automated functional tests by setting thresholds for how long individual method calls may take, or during user acceptance testing by soliciting feedback on how the system responds to requests, but it also needs to be tested more rigorously against the system as a whole under realistic production load.

Ideally, these tests would be scheduled to run automatically following successful system integration testing, but this can be challenging if production-like environments are not always available or third-party dependencies are shared.

Summation

The goal of the testing strategy, remember, is to be as fast, complete, realistic, and simple as possible. Each tier of testing adds complexity to the development process. Complexity is a hidden cost that must be justified, and not just to project stakeholders – your future self will need to maintain those tests indefinitely.

This strategy can serve as a model for organizing your own tiered strategy for testing, modified as necessary for your context. If you’ve found new and interesting solutions to the problems discussed in this article, let me know at david.drake@dev9.com.

TEALS

For the past year, my friend Lester Jackson has been volunteering at Manson High School in Central Washington by remotely teaching Computer Science through a Microsoft Youth Spark program named TEALS.

Lester has always been passionate about improving computer literacy, especially in underrepresented communities. Lester and several other volunteers work with an experienced high school teacher, coming in before work to teach CS at their assigned high school one to two days a week.

Why does Lester do it?

According to a 2013 study by Code.org, 90% of US high schools do not teach computer science. With software engineers in high demand in the private sector, schools often cannot find instructors with a computer science background, and struggle to compete with the compensation packages offered in industry. Even more staggering are the following statistics:

• Less than 2.4% of college students graduate with a degree in computer science, and the numbers have dropped over the last decade

• Exposure to CS leads to some of the best paying jobs in the world, but 75% of our population is underrepresented

• In 2012, fewer than 3,000 African American and Hispanic students took the high school A.P. computer science exam

• While 57% of bachelor’s degrees are earned by women, just 12% of computer science degrees are awarded to women

• In 25 of 50 US states, computer science doesn’t count towards high school graduation math or science requirements

Source: Code.org

The program needs more volunteers for next year. Here is how you can get involved:

http://c.tealsk12.org/l/249

#TeachCS

Shell scripting with AWS Lambda

One of the newest pieces of the AWS toolkit is the Lambda compute engine.  Lambda provides an ability to deploy small bits of code that run independently as functions.  AWS only charges for the time that these snippets are running based on the resources requested to run the code.  This allows for extremely granular use of compute resources.

Previously, billing for general-purpose compute power was only available in increments of one hour using EC2. This was true even for managed aspects of EC2: Elastic Beanstalk, RDS, Elastic Load Balancing, etc. This is not to say that there were no services that charged under a different model. Many other non-compute services such as S3, SQS, or Kinesis are billed on a per-usage model. The 100-millisecond pricing model introduced with Lambda provides something that feels a great deal like per-usage pricing for general compute.

Since Lambda functions are small and charged only when in use, Lambda encourages a model where development items are deployed as many single-purpose functions rather than as more monolithic single-server applications. This is the Unix command-line philosophy applied to the cloud. It encourages developers to focus on purpose-built tools and the interaction between components, just as when you build a shell script expecting it to interact with other shell scripts to achieve a larger task.

This parallel with shell scripting is also interesting with regard to administration of your AWS cloud infrastructure as a whole. Lambda Node functions include the JavaScript AWS SDK by default, so building functions to perform cloud maintenance, provisioning, or other scripting is easily done within the cloud itself. I realize that you could always deploy an EC2 instance containing scripts for this purpose in any language needed, but that is a very heavyweight approach for a simple activity that is likely to run for only a few minutes. You incur not only the cost of the instance but all of the development work to get the instance set up and ready to run scripts. Lambda does this all for you.

It is better to produce a series of scripts to manage specific aspects of your AWS infrastructure, just as shell scripts are used on Linux systems to manage aspects of the running machine. A Lambda script can perform a single activity or, when needed, can be chained together with others to perform a series of actions. Lambda also comes with a couple of different external triggers: S3 and SQS. This allows your Lambda scripts to respond to actions occurring in other AWS or external applications that interface with these tools.

This is all supported by the fact that Lambda is secured using IAM roles. This reduces the use of AWS credentials and allows you to pinpoint the AWS products, or even specific instances within a product, that the script has permission to access. This security helps minimize the chance that a buggy script causes issues outside of its allowed domain. It is again similar to using permissions on shell scripts to reduce possible exposure.

All of this points to Lambda as a great tool for managing AWS infrastructure in addition to other compute tasks that you may want to use it for.  In the next article in this series, I’ll  use Lambda to stop existing EC2 instances that are not in use. Stay tuned.

Beyond Code Coverage - Mutation Testing

Code coverage is a metric most of us track and manage. The basic idea is that when you run the tests, the coverage tool records which lines and branches are executed. Then, at the end, it reports the percentage of lines that were executed during a test run. If a line of code was never executed, that means it was never tested. A typical report might look something like this:

Code Coverage Results

You can see the line coverage and branch coverage in that graphic.

Typically, we consider 80% or above to be acceptable coverage, and 95% or above to be "great" coverage.

Lies, Damn Lies, and Statistics

However, coverage numbers are not always enough. Here's an example class and test case; let's see where the problem is:
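
What follows is a minimal sketch of that kind of class and test; the Calculator name and values are illustrative, and JUnit 4 is assumed.

// Calculator.java
public class Calculator {

    public int add(int a, int b) {
        return a + b;
    }
}

// CalculatorTest.java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    @Test
    public void testAdd() {
        Calculator calculator = new Calculator();
        assertEquals(0, calculator.add(0, 0));
    }
}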

So here we have a calculator, and we're testing the add method. We've tested that if we add two zeroes together, we get zero as a result. A standard code coverage would call this 100% covered. So where's the problem?

Codifying Behavior

What happens if somebody accidentally changed the addition to subtraction in the original method? Something like this:
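
Sketched against the hypothetical Calculator above, the broken method might read:

public int add(int a, int b) {
    return a - b;   // subtraction slipped in, yet add(0, 0) still returns 0
}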

The test case will still pass, and we will still have 100% coverage, but we can tell that the result is just wrong.

But, it can get more insidious than this. How about this example:
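
Consider a hypothetical variant of the calculator that parses string input using a radix fixed in its constructor (the corresponding test would pass "0" and "0" as strings):

public class Calculator {

    private final int radix;

    public Calculator() {
        this.radix = 10;   // executed by every test, but never actually asserted
    }

    public int add(String a, String b) {
        return Integer.parseInt(a, radix) + Integer.parseInt(b, radix);
    }
}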

The lines in the constructor count towards the code coverage, despite us never validating them. What happens if somebody goes and changes the radix to 2? The tests will still pass, and the app will be completely wrong!

While additional tests can fix these concerns, how do we know if we have comprehensive coverage?

Mutations

Imagine that we took the code above and did a couple things:

  • Changed all the constants to MAX_INT -- including radix
  • Changed addition to subtraction

Now, these should be breaking changes. But what happens if you make those changes, and your tests still pass? That means you're not testing it well enough! But if your tests fail or throw an error, then you are testing it.

This is the basic idea behind mutation testing. By inspecting the bytecode, we can apply transformations to the code, and re-run the test suite. We can then count the mutations that produced breaking changes, and the mutations that did not produce breaking changes -- we will call those "killed" and "survived".

Your goal, then, will be to maximize the "killed" stat and minimize the "survived" stat. We have a Maven Site Docs example, and I have wired in the pitest library, which provides the mutation framework. The output looks a little something like this:

pitest output 1

Those red lines are places where the mutations survived -- and the green ones are where mutations were killed. Highlighting or clicking the number on the left shows this portion:

pitest mutations applied

Mutations Available

Each language and framework is unique, but here are some examples of before and after:
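
For instance, pitest's negate conditionals and conditionals boundary mutators rewrite comparisons roughly like this; the surrounding code is a hypothetical sketch, not taken from the pitest documentation.

// Before: negate conditionals mutation target
if (status == Status.ACTIVE) { enable(); }
// After the mutation: the condition is negated
if (status != Status.ACTIVE) { enable(); }

// Before: conditionals boundary mutation target
if (total >= discountThreshold) { applyDiscount(); }
// After the mutation: the boundary is shifted
if (total > discountThreshold) { applyDiscount(); }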

Of course, this isn't limited to just equality mutations. We can also do stuff like this:
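
A sketch of what pitest's void method call mutator does; the classes involved are hypothetical.

// Before
public void register(User user) {
    auditLog.record(user);   // returns nothing; a candidate for removal
    repository.save(user);
}

// After the mutation: the audit call is gone. Does any test notice?
public void register(User user) {
    repository.save(user);
}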

In this case, we are removing a method call that does not return a value. If your code still passes after this, why does that method call exist?

You should check out the full list of mutations available in pitest.

Conclusion

This method of testing is obviously more comprehensive, but it comes at a cost: time. Making all of these mutations and running the test suite can take significantly longer. Each mutation requires a run of the whole test suite to make sure it was killed. Obviously, the larger your code base, the more mutations are needed, and the more test runs are needed. You can tweak things like the number of threads used to run the suite, the mutations available, and the classes targeted. Except for the thread count, tweaking those will reduce the overall comprehensiveness of the testing.

Now you know another tool for quality. You can see how we wired it up in our Maven Site Docs example, and how we integrated it into the Maven lifecycle.