Introduction
Microservices are currently one of the hottest topics in enterprise IT, and for good reason. They offer IT and the businesses it serves many advantages, notably the ability to make small changes to code quickly, efficiently, and with reduced risk. Another big advantage: services are organized around business capabilities and lend themselves naturally to continuous delivery.
The term microservices describes a software architecture built around small, decoupled processes that communicate through language-agnostic APIs. Although the individual services are relatively simple, enterprises, especially those looking to transition existing monolithic applications to a microservices architecture, need to realize that microservices do not magically make complexity disappear.
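To make that definition concrete, here is a minimal sketch of one such service: an independent process exposing a small HTTP/JSON endpoint that any client, written in any language, can call. The service name, port, and payload are illustrative assumptions, not a prescription.

```python
# A minimal, hypothetical microservice: a small, independent process that
# exposes a language-agnostic HTTP/JSON API using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderStatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any client, in any language, can call this endpoint over HTTP.
        body = json.dumps({"service": "order-status", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port and service name are illustrative only.
    HTTPServer(("0.0.0.0", 8080), OrderStatusHandler).serve_forever()
```

A real system consists of dozens or hundreds of such processes, each small on its own; the complexity lives in how they are connected, which is exactly the point of the next paragraph.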
Rather, the complexity of application delivery and management shifts from being a problem within a single, large application to being a challenge of tracking and visualizing the external relationships between multiple services. Handling one complicated, monolithic application is replaced by the need to manage a complex network of distributed, comparatively simple components that must work together.
The microservices landscape is also shaped, almost daily, by mobile and the Internet of Things (IoT), which increase the pressure to improve and accelerate software delivery. To respond to this pressure, enterprises need to adopt new delivery models, release strategies, and tools that can handle many services and their dependencies.
Large, monolithic applications—in which release cycles are long and many complex features are packed into each release—will always have their place, but the software development landscape is shifting, presenting new opportunities and challenges.
Challenge 1: How to Handle Changing Deployment Patterns
How can your enterprise move away from monolithic applications, which require tens or even hundreds of steps to deploy their many components, toward managing a much larger set of individually simpler components?
Workflow tools, which map out what needs to happen step by step, are poorly suited to this challenge: they are designed to string together a complicated set of steps for one large deployment, not to handle many small deployments, dependencies between components, or concurrent releases.
Delivering an individual service or container with one click doesn’t help much if you can’t easily coordinate multiple services—especially if you need to have compatible versions of many different services running together to make an end-to-end use case work. Equally, you need to have some flexibility in the choice of services to be deployed, so that a single test failure in one service does not immediately bring your delivery process to a halt, preventing all other services in the same delivery pipeline from going live.
To make the most of a microservices architecture, you need a single overview of your deployment dependencies and configuration. That overview is what lets you realize the speed and efficiency benefits that continuous delivery pipelines promise.
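As an illustration of what such an overview might feed into, here is a minimal sketch, assuming a simple in-house release model in which each service declares which versions of other services it is compatible with, and a proposed release set is checked before promotion. All service names and version numbers are hypothetical.

```python
# Hypothetical compatibility data: each service lists the versions of other
# services it can run against. In practice this would live in a shared tool,
# not a hard-coded dictionary.
COMPATIBILITY = {
    "orders":    {"payments": {"2.1", "2.2"}, "inventory": {"1.4"}},
    "payments":  {},
    "inventory": {},
}

def incompatibilities(release_set: dict[str, str]) -> list[str]:
    """Return human-readable conflicts for a proposed set of service versions."""
    problems = []
    for service, version in release_set.items():
        for dep, allowed in COMPATIBILITY.get(service, {}).items():
            deployed = release_set.get(dep)
            if deployed is None:
                problems.append(f"{service} {version} needs {dep}, which is missing from the release set")
            elif deployed not in allowed:
                problems.append(f"{service} {version} expects {dep} in {sorted(allowed)}, got {deployed}")
    return problems

if __name__ == "__main__":
    proposed = {"orders": "3.0", "payments": "2.0", "inventory": "1.4"}
    for problem in incompatibilities(proposed):
        print(problem)
    # -> orders 3.0 expects payments in ['2.1', '2.2'], got 2.0
```

A deployment and release tool does this at much greater scale, but the principle is the same: the dependency picture lives in one place and is checked automatically rather than held in someone's head.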
Challenge 2: How to Deal with Dependencies Between Services
In a microservices environment, individual services may have numerous dependencies. As architectures become more complex and include more components, it becomes impossible for IT staff to track all the dependencies and prevent conflicts without automation. If a dependency is missed, the system will fail somewhere in a potentially long sequence of microservice calls, making it difficult to figure out why or to track down the source of the issue.
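One common way to make such failures traceable, widely used alongside dependency tracking though not specific to any particular tool, is to propagate a correlation ID with every inter-service call so a failure deep in the chain can be tied back to the original request. The header name and service URL below are assumptions for illustration only.

```python
# A minimal sketch of correlation-ID propagation across service calls.
import uuid
import urllib.request

CORRELATION_HEADER = "X-Correlation-ID"  # illustrative header name

def call_downstream(url: str, correlation_id: str) -> bytes:
    """Forward the incoming correlation ID to the next service in the chain."""
    request = urllib.request.Request(url, headers={CORRELATION_HEADER: correlation_id})
    with urllib.request.urlopen(request) as response:
        return response.read()

def handle_request(incoming_headers: dict) -> None:
    # Reuse the caller's ID if present; otherwise start a new trace.
    correlation_id = incoming_headers.get(CORRELATION_HEADER, str(uuid.uuid4()))
    try:
        call_downstream("http://inventory.internal/stock", correlation_id)
    except Exception as error:
        # Every log line carries the ID, so the failing hop is searchable.
        print(f"[{correlation_id}] downstream call failed: {error}")
```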
Coordinating the multiple delivery pipelines for different services in this environment is difficult. Inevitably, you end up with a network of pipelines that must be able to share information to provide a seamless path to production.
There will still be plenty of manual tasks to handle as you approach production, so it is important that your tooling supports manual activities. Remember, though, that automating wherever possible reduces errors and increases efficiency.
Challenge 3: How to Avoid Getting Stuck with the Wrong Implementation Technology
Currently, there’s a lot of buzz around container technologies, particularly Docker, which is a great solution for many situations. However, who knows what lies ahead? You can implement microservices with technologies other than containers.
Committing to a new technology is always a gamble. If the market changes or the company behind the technology you have chosen goes bust, you could be left in the lurch. It makes sense to ensure your tools and your processes work seamlessly regardless of the underlying implementation technology.
With the market evolving so quickly, it’s wise to keep your implementation options open and flexible until you can settle on a solution that works for your enterprise.
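One way to keep those options open, shown here as a minimal sketch rather than a recommendation of any particular product, is to have your delivery tooling talk to an abstract deployment target rather than to a specific container platform. Every class and method name below is hypothetical.

```python
# A sketch of decoupling the delivery process from the implementation technology.
from abc import ABC, abstractmethod

class DeploymentTarget(ABC):
    @abstractmethod
    def deploy(self, service_name: str, version: str) -> None:
        """Roll out one version of one service to this target."""

class ContainerTarget(DeploymentTarget):
    def deploy(self, service_name: str, version: str) -> None:
        # In a real pipeline this would call the container platform's API.
        print(f"pulling and starting container {service_name}:{version}")

class VirtualMachineTarget(DeploymentTarget):
    def deploy(self, service_name: str, version: str) -> None:
        # The same pipeline step still works if containers fall out of favor.
        print(f"provisioning VM image for {service_name} {version}")

def release(target: DeploymentTarget, services: dict[str, str]) -> None:
    for name, version in services.items():
        target.deploy(name, version)

if __name__ == "__main__":
    release(ContainerTarget(), {"orders": "3.0", "payments": "2.1"})
```

Swapping the container target for a VM target, or for something that does not exist yet, then changes one object rather than the whole pipeline.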
In Summary
Automating, visualizing, and coordinating changes to a complex network of interdependent services is the major challenge of delivering microservices applications.
Once that complexity grows beyond what people can manage by hand, enterprises need deployment and release tooling that can track the numerous relationships between components. Typically, that means embracing a delivery model that is designed to handle dependencies.
About the Author
Andrew Phillips is the VP of DevOps Strategy for XebiaLabs, the leading provider of software for Continuous Delivery and DevOps. He is a cloud, service delivery, and automation expert who has been part of the shift to more automated application delivery platforms. He contributes to a number of open source projects, including Apache jclouds, the leading cloud library, and is a co-organizer of the DynamicInfraDays container community events. He is also a frequent contributor to DevOps Magazine, DZone, InfoQ, CM Crossroads, and SD Times.