
This is a place for me to dump my occasional musings. Most of them will be about technology and software development but there might be the occasional random topic.

My hopes are that this blog will make me a better communicator and a better developer. I hope that at least a few people stumble across it, a few of those actually read it, and that a few of those gain something from it.

I want this to be a learning tool for myself and others, so if you can add something to the discussion, please comment.


A software engineer, technologist, and aspiring entrepreneur. Currently spends his days coding coupons and the color green at Groupon.

Roommates Suck

Matthew Moore


If you have ever had roommates, odds are that you've had one you didn't like. Even if you got lucky and liked all your roommates, there were probably moments where homicide wasn't an option you had totally ruled out as a solution to the dirty dishes problem. And we have all heard stories about best friends who decided to room together and now don't speak anymore.

Applications Are People Too

Well, not really; but they don't always like cohabiting the way people do. Just like people, software applications have needs, and when one application tramples on the other's needs they tend to fight like college roommates.

Software applications' needs, what we call dependencies, can vary greatly. They depend on access to system resources like CPU, memory, block I/O, and networking. They depend on the presence of specific versions of other software, like Java 7 or MySQL 5.5. And God help you if you have two applications that both need the same version of the same package but require different and incompatible compile-time options for that package.

The package dependencies can become difficult to manage, and the interactions complicate debugging if we try to deploy multiple applications to a single machine. We also have to deal with resource contention: if one of our web applications is getting lots of traffic and saturates our host's network connection, our co-hosted applications' performance will suffer, even though they are under normal load.

Despite all of that, it is often impractical to run every application on its own machine because of cost. So, how do we escape this hell?

The Canonical Approach

In the spring of 1546, Christian scholars were gathered by the Catholic Church in Trent, Italy and charged with identifying which prophetic texts were truly inspired by God. The texts that they selected, canonized, became what we know as the modern Bible. With the exception of changes due to retranslation and some denominations that have added books, the Bible you'd find at a church altar has the same contents as a Bible from 450 years ago.

This is probably the most common solution to this problem, and many software shops use the same approach to manage their dependencies and interactions. Certain versions of software packages, tools, and languages are blessed by the Council of Senior Engineers, and only those are used to build applications. While this can be effective, it only simplifies the problem; it does not solve it.

The largest shortcoming appears when something new comes out that you want to use: if its dependencies are not found in the canon, you can't use it. If you actually need this new thing, you have to tackle updating everything else to work with the new dependencies you are introducing. This can quickly spiral out of control, since the dependencies are software themselves and may require newer versions of other packages that also conflict with the environment's sacred configuration.

This approach has not solved our problem; we still have to manage the interactions between dependencies. The interactions still exist, they are just well understood and, hopefully, stable, so that, unless we are upgrading something, they can be largely ignored.


So We Should Use VMs Then?

Yes and no. Virtual machines do solve this problem, but they add the nontrivial overhead of hardware emulation; there is a better way.

What About Process VMs? They Don't Have To Emulate Hardware.

While that is true, the problem with process VMs is in their name. We are still a process running on a host platform and therefore have the same problems as any other process. We are competing for resources and can have conflicting dependencies.

We might be tempted to solve this problem by deploying multiple applications to a single instance of our process VM that uses up all of the system's resources, but then we'd realize that this has undone all of our hard work. We are, again, sharing system resources between these applications.

Never Fear, Containers Are Here!

Software containers, e.g. LXC, FreeBSD jails, Solaris zones, and the like, solve this problem nicely. They let us prevent resource contention by limiting each application's access to system-level resources like CPU, memory, block I/O, and networking, and they manage dependency conflicts by isolating each application's view of everything in the execution environment, from process trees to mounted file systems.

Combining this resource isolation with copy-on-write file systems like ZFS and Btrfs gives us excellent disk and memory performance characteristics. Because multiple containers can share the same file until one of them changes it, containers are more space efficient and caching is far simpler.

We could have had all of these things when we were running in a VM, so why are containers better? First, we are running as a native process in the host OS: no hardware emulation overhead and no hypervisor to manage. Second, containers are much more portable than VMs because of their size. VMs are often a few gigabytes in size because they contain all the information needed to simulate an entire piece of hardware. A container will be much smaller than the VM image shipping the same piece of software.

But Mom... Containers Are Hard!

Docker commoditizes the use of containers as a software deployment mechanism. It abstracts away much of the underlying craziness and gives us other awesome things like container versioning, inheritance, and a way to share containers with others.
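As a minimal sketch of what that looks like (the base image, package, and file names below are illustrative, not from any real deployment), a Dockerfile declares an application's dependencies and builds on a shared, versioned parent image:

```dockerfile
# Build on a shared, versioned base image; every image built FROM
# this line reuses its layers instead of duplicating them.
FROM ubuntu:14.04

# This application's dependencies live inside the image, not on the
# host, so they cannot conflict with a neighbor's dependencies.
RUN apt-get update && apt-get install -y openjdk-7-jre

# Ship the application and declare how to run it.
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Rebuilding with a new tag gives you a new, versioned image, and other Dockerfiles can use this image in their own FROM line, which is the inheritance mentioned above.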

What Dependencies Are We Left With?

Because containers are kernel-level primitives, your only dependency is a kernel that supports the container technology you choose. For example, if you use Docker with LXC, that means any Linux distro with a kernel version of 3.8 or later.

A Brief History of Containers

Before I wrap this up, a quick history lesson.

We've been deploying software to containers for as long as we've been deploying software, and we've had problems with interactions ever since we decided to put more than one piece of software in a container. The container itself, though, has changed over the years.

The first container was the bare metal: true isolation, as the first computers could only execute a single program at a time. You would show up at your scheduled time to run your program and hope that it finished in the time you had reserved.

This showing-up-on-time requirement frustrated us, so we invented the operating system so that we didn't have to. Operating systems also came with the added bonus of allowing us to interleave the execution of programs to make the best use of our physical resources.

The operating system served us well for a while but as software became increasingly complex we started to feel the pain of having a single machine be host to multiple programs. We were in a rush to get to production, so we invented the hypervisor to simulate several virtual machines on one real machine.

This also worked well for a while but eventually we realized that simulating hardware was an awfully high price to pay for not having to do program isolation correctly. So we went back to the trusty warhorse that is the OS and gave it the proper primitives to isolate programs from each other.


Multi-Tenant Containers Are An Anti-Pattern

Putting multiple applications in the same container is an anti-pattern, regardless of what the container is, but it’s ridiculous to suggest that we go back to a piece of hardware only having a single program on it. Ignoring the ridiculous cost of doing that, most of our applications are not a single program but a complicated set of interconnected programs.

Since we can’t go back to simpler times we should accept kernel level application isolation as a best practice for software application deployment.

In short, be a good guy, not a scumbag, and don't give your software roommates, because roommates suck!

Multicellular Economics and the Mechanization of Cooperation

Matthew Moore

Everything As A Service

It seems that you can get just about everything as a service today.

Need a software platform but don't want to build one? Heroku, dotCloud, Azure, and countless others have got you covered.

Ok with building your own software platform but not a datacenter? AWS, Rackspace, Linode, and a half-dozen more come readily to mind.

Need to mail or print something but hate licking stamps and unjamming the printer? Never fear, Lob is here!

Hell, you can even get employees as a service with things like Mechanical Turk and CrowdFlower.

All of these companies sell pretty distinct products, but I would argue that at a more fundamental level they are all selling the same thing, and they are not doing anything new, only imitating a pattern that someone else invented a very long time ago. By looking at what the original inventor of this pattern has done, we can gain valuable insight into what the future looks like.

Why is Ignorance Bliss?

Ignorance is bliss because knowledge is misery; let me explain.

When creating a system to solve a problem, engineers, regardless of their field, are faced with two types of complexity: intrinsic and incidental.

Intrinsic complexity is complexity that is inherent and irremovable from the problem at hand. For example, when designing a system to remove oil from the earth there is no way for the engineers to remove the need to pass through several kilometers of earth to reach the oil.

Incidental complexity is complexity that is introduced by the implementation of the system solving the problem. Continuing with our oil example, all of the drilling equipment is an incidental complexity. The complexity of maintaining a drilling rig was not introduced until we used it to solve our original problem, getting oil out of the ground.

When I fill up my car with gas, I don't want to be required to possess all the knowledge needed to get that gas from its crude form into the highly combustible form I need. I'd rather be blissfully ignorant of that and only be required to understand my interface to that resource: the gas pump. I'll come back to this analogy later, but for now let's circle back to those As-A-Service companies I mentioned earlier.

Selling Ignorance

The product they are really selling is ignorance, or rather, they are selling you something that allows you to still accomplish your goal while remaining ignorant of the complexity of some part of that process. For example, AWS allows you to be ignorant of the complexities of running a data center, and Lob of the complexities of printing and mailing.

Functional Specialization

Functional specialization is something that all of us are familiar with even if we've never heard it called that. We interact with dozens of functional specialists every time we leave home, and you are probably one yourself.

The most obvious examples of people who are functional specialists are those who have their specialization in their job title. Bus drivers, auto mechanics, mailmen, airline pilots, music teachers, and software engineers are just a few examples of the many kinds of specialists that we have in our society.

We humans specialize because it is advantageous to do so. We accomplish far more as a collective of specialists than we ever could if everyone were a generalist. But before we get too full of ourselves: we didn't invent this idea, we copied it from Mother Nature.


Cell potency is the terminology biologists use to describe the ability of a cell to differentiate into a new, specialized kind of cell. Cells with the highest potency are called totipotent; they can differentiate into all the different kinds of specialized cells and form very complicated organisms capable of doing far more interesting things than a single-celled organism like a bacterium.

Just like the cells that differentiate and eventually form our bodies, we are totipotent members of society. Each one of us has the potential to differentiate, specialize, and fill a societal need. Humanity stole this idea from Mother Nature, who had been using functional specialization for billions of years; multicellular organisms are the ultimate example of it.

We, humanity, reached the analog of cellular differentiation somewhere between 5000 B.C. and 4000 B.C. in Sumer, where we have the first records of the division of labor among specialists.


Morphogenesis, Greek for "beginning of the shape", is the biological process that differentiated cells undergo to arrange themselves into the organs and structures that make up complicated organisms. All of the cells that specialize in transporting get together and form the circulatory system, and all of the cells that specialize in data processing get together and form the nervous system.

Humanity imitated morphogenesis in the late 16th and early 17th centuries with the invention of the corporation. The most iconic example of this was the East India Company, founded in 1600. Corporations gather specialists into groups to perform even more specialized tasks by allowing individuals to become hyper-specialized. For example, a sailing specialist becomes a specialist at sailing between India and the Cape of Good Hope, making trips faster and more profitable.

We have come a long way since the East India Company but I think we have only now just finished our "morphogenesis"; so what is next?


As an organism matures, the separate organs that are made of differentiated cells take a step back towards unification. A side effect of being a specialized cell or organ is that you depend on the other specialists to do what you no longer can for yourself. You need to be able to communicate things like "I need more oxygen" so that you can get what you need.

To accomplish this communication, your body has an incredibly complicated set of signals and feedback loops that communicate the state and needs of various parts of your body to all the other parts: a chemical code. Despite all the complexity of the system as a whole, each cell need only understand the signals it needs to send and the ones it needs to respond to.


Back to the Gas Pump

The gas pump is my interface to a resource that I need to accomplish my goal, let's dig a little more into that interface.

First, it is standard. The gas pump interface is more or less the same whether you fuel up at Shell or BP.

Second, it is a simple abstraction that hides complexity. I don't need to understand how the oil is found and brought out of the ground, how it is refined, how it is transported, or how it is stored below ground before I pump it into my car.

The modern gas pump replaced the station attendants that used to be necessary, because the older pumps weren't as simple or standardized and had no way to collect payment; the modern gas pump is a better solution to the same problem. We've seen many examples of this interface improvement as a form of automation at the consumer level over the years, with machines like the ATM, the vending machine, and the telephone, but only now are we starting to see this kind of automation at the level of corporations.

API: The Digital Interchangeable Part

In 1801 Eli Whitney went before the US Congress with 10 guns he had manufactured using identical parts. He disassembled them, mixed all the components together in a pile and then reassembled them all into working firearms. Congress was so impressed with the demonstration that they ordered all US military equipment be standardized. The advantage is obvious, if a part of something breaks it can be replaced with another identically functioning part instead of having to discard the entire device.

The interface is all that matters to the user of an Application Programming Interface (API); they can be ignorant of the complex inner workings. We can swap out components behind that interface or even switch to an API from an entirely different provider. As long as the interface remains constant, we can be confident that the system will continue to work, just like Eli Whitney's guns.
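To make the interchangeable-part analogy concrete, here is a minimal Python sketch; the provider names and methods are invented for illustration, not real services. Two mail providers implement the same interface, so the calling code never changes when one is swapped for the other:

```python
from abc import ABC, abstractmethod

class MailProvider(ABC):
    """The interface: all a caller is allowed to know about."""
    @abstractmethod
    def send_letter(self, address: str, body: str) -> str: ...

class AcmePost(MailProvider):
    def send_letter(self, address, body):
        # Imagine an HTTP call to Acme's (hypothetical) API here.
        return f"acme:{address}"

class BulkMailCo(MailProvider):
    def send_letter(self, address, body):
        # A completely different implementation behind the same interface.
        return f"bulkmail:{address}"

def send_invoice(provider: MailProvider, address: str) -> str:
    # This code never changes when we swap providers.
    return provider.send_letter(address, "Your invoice is enclosed.")
```

Switching providers is then a one-line change at the call site, with the rest of the system none the wiser.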

APIs are the interchangeable parts of the digital world, and just like the interchangeable part drove the mechanization of manufacturing, the API will drive the mechanization of cooperation. The API will become the standard method for business-to-business cooperation.

Mechanical Cooperation

It used to be, and often still is, the case that when two organizations want to purchase a good or a service from one another, there is a time-consuming manual process required to make sure that each party understands the other's needs and timelines and that payment is arranged. This manual interaction, much like the station attendant, will soon be a thing of the past.

With increasing frequency, companies are exposing their goods and services via APIs, codifying their business processes and increasing efficiency and speed. These APIs will evolve to be like the gas pump: small differences in the interface, but nothing very significant, minimizing the friction for businesses to switch from one solution provider to another. This decreased friction will encourage competition and foster innovation in the mechanisms behind these APIs.

Higher Function

Just like the cells in our bodies became more complicated and higher functioning by differentiating, organizing, and codifying communication, so will humanity itself. The ability to handwave over entire processes and instead interface with an API will lower the barrier to entry in almost every market and allow innovative new businesses to be created with ever-increasing speed.

This has been going on in the world of software for some time; one need only look at the sheer volume of new businesses being born and dying in Silicon Valley every day to see it. As software eats the world and APIs become the standard business interface, this rapid innovation and commoditization of common problems will apply to every industry, no exceptions.

We will be able to conceive, design, and build things on a higher plane of thought because the details have been turned into building blocks that we need only snap together, remaining ignorant of their inner workings.

Buckle Up...

So brace yourselves: we are on the verge of another surge in technological innovation, and like every technological revolution that has come before, it won't be without its bumps. Entire categories of jobs will be destroyed and replaced by new ones.

..but Smile

It will end like all the others too: we'll make it through and be better for it.

I'll leave you with this quote:

"Everything should be an API." - Steve Van Roekel, FCC Director

Memento mori!

Matthew Moore

When Roman generals won great victories they were honored with a victory parade. It is said that while the general rode in the parade he was accompanied in his chariot by a single slave who had but one job: to remind the general of his own mortality and fallibility. Every so often, while the general was basking in all the glory of his victory, the slave would lean in and whisper "Memento mori" in the general's ear, reminding him that despite all the praise and good fortune he was experiencing, he was still only human.

et nos mortales sumus nimis

As software engineers today, it is all too easy to get caught up in our good fortune and let it go to our heads. We get emails and InMails from recruiters regularly. We work in an industry that allows us the luxury of working from wherever we like. We are showered in all sorts of crazy benefits, ranging from meals to laundry services; perks like these are unheard of in other industries. All on top of what most people would consider a generous salary and excellent job security.

You've worked hard...

Most of us have worked hard to accomplish what we've accomplished, and I don't want to minimize that. And I'm not arguing that companies shouldn't be compensating engineers in this way. Competition for talent is fierce right now, and this is how companies compete for the cream of the crop.

...but you're also lucky.

If you're like me (I think most of you are), then you're working in software because you have a passion for it; everything else is just a perk. I often hear other engineers making fun of people who got degrees in fields like art, literature, and Russian musical history; how could they possibly expect to get a job with a degree like that?! But how are those people really different from you and me in how they chose what they wanted to do? The only difference is that your passions happen to align a little better with the economic demands of our time.

So try not to let it go to your head and remember, Memento mori.

Production Systems and Rocket Science

Matthew Moore

Three guys sitting on top of 960,000 gallons of recently ignited fuel.

We've all seen the heroic spaceship pilot sequence in one form or another. The pilot is flying his damaged craft as it plummets towards an unavoidable and unyielding mass. The ship is either on fire or breaking apart (probably both), alarms are blinking and wailing for attention, and the shaky cam is turned up to eleven. But, despite what Hollywood would have us think, this is not the 23rd century and Scotty can not beam me up.

Our spacecraft are simultaneously far more delicate and more ham-fisted than anything out of Star Trek. The Enterprise was powered by dilithium crystals (a.k.a. rocks); compare that to the Saturn V, which carried 16 lbs of oxygen, hydrogen, and kerosene (a.k.a. scary flammable shit) for every 1 lb of "stuff". It is because of things like this that real space exploration is not at all like the movies.

There are a few lessons here that we, as developers, can learn from how modern space agencies really deal with alarms during a flight:

  • Alarms are never ignored.
    • cost of false negative
    • annoying alarms
  • Responding to an alarm follows a procedure.
    • acknowledgement by on-site responder only
    • collaboration protocol
    • escalation protocol

Never Ignored

Alarms are never ignored during flight because the cost of a false negative can be huge; they are never dismissed as noise (more on that later). To make sure that our selfish human interests align with addressing the alarm, it is designed to be annoying.

Cost of False Negative

Alarms on spacecraft are placed on any system whose function is deemed critical to the mission. We should do the same in our software.

Ask yourself: what is the "mission" of your system? What are the components whose malfunction could cause that mission to fail? If there is a component on that list that would not alarm if it began to fail, then you have some work to do.
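As a sketch of that exercise (the component names and checks below are invented, not from any real system), you can treat your list of mission-critical components as data: every component gets a check function, and any failed check is an alarm:

```python
# A minimal sketch of mission-critical monitoring: every component
# whose failure would sink the "mission" gets a check, and a failed
# check always produces an alarm.

def check_database():
    return True   # e.g. run "SELECT 1" against the real database

def check_payment_gateway():
    return False  # e.g. hit the gateway's health endpoint

CRITICAL_CHECKS = {
    "database": check_database,
    "payment-gateway": check_payment_gateway,
}

def run_checks(checks):
    """Return the components that should be alarming right now."""
    return [name for name, check in checks.items() if not check()]
```

The point is the inventory, not the implementation; in practice you would wire these checks into a monitoring tool rather than a hand-rolled loop.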

There are plenty of open-source or free (as in beer) tools to help you monitor your system:

Nagios - Graphite - Zenoss - NewRelic - Pingdom - statsd

Annoying Alarms

At the end of the day the people responding to these alarms are just that, people. They are susceptible to drowsiness, distraction, hunger, and all the other weaknesses of this mortal coil.

NASA does not make alarms that alert only a single time or that use a polite chime, and you shouldn't either. If you deem something important enough to trigger an alarm, it should annoy you until you acknowledge it.
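The principle can be sketched in a few lines of Python; the notification mechanism is abstracted into a callback here, and in practice this repeat-until-acknowledged behavior is exactly what an alerting service gives you:

```python
def alert_until_acknowledged(notify, acknowledged, max_attempts=100):
    """Fire the alarm repeatedly until a human acknowledges it.

    A polite single chime is exactly what this loop exists to
    prevent; max_attempts is a safety valve, not a politeness
    feature.
    """
    attempts = 0
    while attempts < max_attempts and not acknowledged():
        notify()  # page, ring, flash: whatever is annoying enough
        attempts += 1
    return attempts
```

A responder who ignores the first few pages simply gets paged again until they give in.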

Using a tool like PagerDuty for this is a natural supplement to whatever monitoring solution(s) you employ.

Follows a Procedure

On-Site Responder Only

An alarm on a space flight is only acknowledged by the on-site responder. Mission Control might receive an alarm via telemetry and begin responding before an astronaut acknowledges the alarm, but they do not silence the alarms because they are not the on-site responder.

On-site responder takes on a funny meaning when dealing with software systems, but we can think of it as the person taking responsibility for managing that alarm. If you have flaky WiFi because you're on a plane, or have three minutes of battery life left, don't acknowledge the alarm!

Collaboration Protocol

An astronaut rarely tackles an alarm solo; they almost always collaborate with other individuals down on the ground. The first steps of this collaboration begin immediately after acknowledging the alarm. The astronaut informs Mission Control that they are responding to the alarm, and Mission Control acknowledges this. From this point onward both parties keep an open channel of communication, constantly updating each other on their investigation and seeking consultation on potential actions.

While in software you may not always have to work with someone else to resolve an issue, you should still have a process for how parties will communicate when they do need to. All problems can be made worse if people fail to communicate as they work to resolve them. You risk stepping on each other's toes, duplicating effort, or neglecting something because you thought someone else was handling it.

If your team doesn't already use a group chat solution like HipChat or Campfire I'd highly recommend it.

Escalation Protocol

If something starts to go seriously wrong or the responder can't resolve the issue on their own, there exists a process for escalating the issue to those with more knowledge of the system experiencing the problem.

You should have the same process within your team or organization. Have a process for reaching out as an issue becomes increasingly critical or takes too long to resolve. This could be as simple as sending an email to your whole team, or as drastic as paging an all-hours reliability engineer or contacting a vendor for support. Regardless of what your escalation protocol is, make sure that your team members know what it is, and review it occasionally to make sure it still reflects how you want to handle alarms and problems.
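One simple way to codify an escalation protocol is as data: a table of how long an incident has gone unresolved and who should hear about it at that point. The tiers and timings below are invented examples, not a recommendation:

```python
# A sketch of an escalation protocol: the longer an incident stays
# unresolved, the wider the circle of people who hear about it.
ESCALATION_TIERS = [
    (0,   "on-call engineer"),
    (15,  "whole team email"),
    (60,  "reliability engineer page"),
    (240, "vendor support ticket"),
]

def escalation_targets(minutes_unresolved):
    """Everyone who should know about an incident this old."""
    return [who for threshold, who in ESCALATION_TIERS
            if minutes_unresolved >= threshold]
```

Keeping the tiers in one reviewable table makes the occasional "does this still reflect how we want to handle alarms?" check trivial.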

Do What Works

At the end of the day, though, your software probably does not have the uptime requirements or the cost of failure of a manned space flight's software. Make reasonable choices about how much monitoring and logging you and your team need to feel confident in your system.

Starting a Blog...Again.

Matthew Moore

I'm going to try and start blogging again. I'm going to try and do a larger post every 2 weeks and a smaller post during the intervening week. We'll see how it goes.