Why Settled Uses Microservices, and Why You Should Too

By now, a lot has been written about microservices and we’re starting to discover some excellent usage patterns, and yet there are still developers starting new projects and building new applications using monolithic architectures. It’s time for that to change.

N.B. Scroll to the bottom for a list of handy resources on microservices.

Traditionally, applications were built from a single large codebase. This sounds simple in theory but lends itself to a myriad of problems as the application grows more complex. Monoliths like this eventually become unwieldy to maintain and improve, and with multiple points of failure they become brittle.

I was first initiated into the school of microservices in early 2016 when I joined prop-tech start-up Settled, and in particular into a tool called Docker. Docker is an occasionally convoluted but extremely powerful containerisation tool, and that makes it a great fit for building microservices.

Like many organisations, Settled has a large PHP monolith that can be a challenge to work with and struggles to meet the evolving needs of the business.

As a team, our task was to carve out blocks of functionality from this monolith and rebuild them as a constellation of Node.js-based microservices.

The plan was to increase the pace of innovation in the tech team and reduce the amount of time spent on maintaining the tech stack. Ultimately the project was a huge success, and after a year of using Docker in production we can say that as a team, Settled recommends it wholeheartedly. The Docker team has been making big strides over the last year and whilst there is still a long way to go, it’s finally coming of age.

The big benefit of containerisation, and the reason it’s such a hot topic these days, is environment encapsulation. In short, this means that everything inside a Docker container is completely separate from the outside environment, save for a few (very small) entry points that you define.

Encapsulation gives us a long list of benefits but does require a different way of thinking. Some may argue this is a disadvantage, but in fact it encourages developers to produce higher quality code by design, in the same way that functional programming enforces better code quality.

Building Microservices by Sam Newman is an excellent resource if you’re interested in getting your teeth into microservices in more depth.

Benefits of Microservices

Predictability

We define what goes into our containers at build time: from our application’s dependencies all the way down to the specific flavour and version of operating system we want. We know exactly how our application will behave no matter the machine it’s running on.
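
As a rough sketch (the image tag and file names here are illustrative, not Settled’s exact setup), a Dockerfile pins all of those choices in one place:

    # Pin the exact OS flavour and Node.js version rather than "latest"
    FROM node:8.9-alpine

    WORKDIR /usr/src/app

    # Install dependencies exactly as recorded in the lock file
    COPY package.json package-lock.json ./
    RUN npm install

    # Copy the application source in last so the dependency layer stays cached
    COPY . .

    CMD ["node", "server.js"]

The lock file matters here: without it, npm install could still pull in different dependency versions on different days.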

No more incompatibilities between developers with different versions of dependencies on their local boxes. No more differences between staging and production environments. No more time wasted fixing inconsequential problems.

Portability

Applications running in Docker take their entire environment with them. This makes them predictable but also makes them extremely portable because the environment we really care about is the one encapsulated within the container.

Docker containers can run on various flavours of Linux as well as Mac and Windows machines. A new server can be spun up and a Docker container deployed to it quickly, with no fuss and zero manual setup.
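
In day-to-day terms, portability looks like building an image once and running it wherever a Docker daemon is available. The registry and image names below are placeholders:

    # Build the image once, on any machine with Docker installed
    docker build -t myregistry/example-service:1.0.0 .

    # Push it to a registry so other machines can pull it
    docker push myregistry/example-service:1.0.0

    # Run it on any Linux, Mac or Windows host with Docker; nothing else to install
    docker run -d -p 3000:3000 myregistry/example-service:1.0.0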

Deployability

Using Continuous Integration we can set up a beautiful, automatic deployment workflow that takes all the pain out of pushing to production. At Settled, when we merge a feature branch into our master branch, an automated testing environment is spun up on demand (we use Codeship), which runs our tests and builds the final Docker image. Incidentally, Codeship uses Docker itself to create on-demand encapsulated testing environments.

When all the tests have passed, we deploy the new Docker image to our servers using Docker Cloud by flicking a switch, and in our staging environment this redeploy happens automatically.
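
Stripped of the Codeship specifics, the pipeline boils down to a handful of Docker commands; treat this as a sketch with placeholder names rather than our exact configuration:

    # Build a candidate image and run the test suite inside it
    docker build -t myregistry/example-service:candidate .
    docker run --rm myregistry/example-service:candidate npm test

    # Tests passed: promote the candidate and push it to the registry
    docker tag myregistry/example-service:candidate myregistry/example-service:1.0.1
    docker push myregistry/example-service:1.0.1

    # Staging picks up the new tag automatically; production is one deliberate click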

Containers can be moved and deployed anywhere.

Responsibility

Each individual microservice should have a single responsibility. This decouples it from the greater application and makes it super easy to improve, remove or replace one particular aspect of the application without impacting the rest of it. Good luck doing that with a monolith!

Having a single responsibility also makes it clear to developers what the service should and should not be doing. Rather than adding tangential functionality to a loosely related area, a new microservice should be created. In this way a constellation of microservices evolves and changes as the needs of the business change.
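
One way to picture the constellation is a Compose file in which each service owns exactly one concern. The service names and versions below are invented for illustration:

    # docker-compose.yml (illustrative services, not Settled's real ones)
    version: "2"
    services:
      listings:        # owns property listings and nothing else
        image: myregistry/listings:1.4.0
      search:          # owns search queries over listings
        image: myregistry/search:2.0.1
      notifications:   # owns emails and push notifications
        image: myregistry/notifications:0.9.3

Replacing or retiring one of these services is an edit to a single entry, not surgery on a shared codebase.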

Scalability

A traditional monolith is easy to scale vertically by adding more resources to the server but not so easy to scale horizontally. Microservices on the other hand are easy to scale both vertically and horizontally because they are decoupled from all other services and each have a single responsibility. We can simply spin up more containers or servers and put a load balancer in front of them — job done.

This does require that each microservice is completely stateless. It’s not sensible to store data of any kind inside an individual container instance, because killing, redeploying or even just restarting it can cause data loss. The environment inside the container is designed to be ephemeral and immutable, but with the flexibility of a temporary local file system.
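
With Docker’s built-in Swarm mode (we happen to use Docker Cloud, but the idea is the same), scaling a stateless service out or in is a one-liner; the service name and replica counts are illustrative:

    # Start a service with three identical, stateless replicas behind Swarm's routing mesh
    docker service create --name example-web --replicas 3 -p 80:3000 myregistry/example-service:1.0.1

    # Demand spikes? Scale out with a single command; scale back in the same way
    docker service scale example-web=10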

Securability

A container runs its own operating system, which can be locked down and customised as much as needed. It is completely separate from the host OS, and the only entry points to the container are those you define at build time; typically these cannot be modified once deployed.

If someone were to gain access to the container they would have a hard time breaking out to the host machine (or vice versa), and if they did they’d soon find they’re running as a user with no privileges. Not to mention they’d only have access to a limited portion of the overall application.
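
Much of this comes down to a few lines at build time. As a sketch (the official Node images ship an unprivileged “node” user, which we lean on here):

    FROM node:8.9-alpine

    # ...dependency and source copy steps as in the earlier example...

    # Expose only the single port this service actually needs
    EXPOSE 3000

    # Drop root privileges before the application starts
    USER node

    CMD ["node", "server.js"]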

Agility

Containers typically take less than one second to boot up. That’s the entire operating system and your application. Docker achieves this by sharing the kernel between the host operating system and the OS inside the container, so unlike a virtual machine there’s no waiting around. It just works, instantly.
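
You can see this for yourself with nothing more exotic than the shell’s time builtin and a small image (exact timings will of course vary by machine):

    # Time a full container lifecycle: create, start, run a command, exit, remove
    time docker run --rm alpine:3.7 echo "hello from a container"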

Containers are blazingly fast compared to virtual machines.

Excellent Resources

These are a few of the resources I found useful when learning about microservices:

  • Microservices (Martin Fowler) (Article)
    A well written and comprehensive article on the general concepts behind the microservices architecture. This is a great introductory piece.
  • Building Microservices (Sam Newman) (Book)
    If you want a more in-depth introduction to microservices then this is the book to read.
  • Introduction to Microservices (Chris Richardson) (Blog Series)
    A detailed series of blog posts comparing microservices to monolithic applications aimed at users of NGINX.
  • Getting Started with Docker (Tutorial)
    If you’re new to microservices or Docker you can work through this simple tutorial to get up and running quickly.