
Building Microservices with Open Source Technologies

  • September 22, 2014
  • By Suresh Balla


In today's IT industry, the speed with which new features are released wins the marketplace. The speed of transforming an idea (or feedback from a customer) from inception to delivery to customers is critical, and the traditional SDLC hinders this speed, creating a lot of friction in product development. In this article, we will take a look at how building microservices helps us remove that friction and enables the speed and availability of new features.

A Brief Introduction to Microservice Architecture

Most of the enterprise applications built today are monolithic in nature, with a client-side user interface, a database or data access layer (with or without web services), and a server-side application. It is monolithic because it is a single logical executable: any change to the system requires building and deploying a new version of the whole thing. Over time, it is often realized that change cycles are tied together; a single change in the system forces the entire monolith to be rebuilt and deployed. And if you decide to scale the system, it is even more frustrating: you must scale the whole system rather than only the parts that require greater resources. The natural solution to these issues is to build each feature of the system as an individual component that runs in its own process and communicates through lightweight mechanisms. These individual components, called microservices, are the parts of the bigger system.

What Is Microservice Architecture?

Martin Fowler's definition of microservice architecture in his article at http://martinfowler.com/articles/microservices.html is:

"The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies."

Building a system with a suite of microservices would result in the following characteristics:

  • Inverse Conway's law: The architecture should lead to the organization of teams, as opposed to the traditional way of technology-specific teams leading to the architecture (a front-end team, business logic team, and data access team leading to a three-tier architecture). With this type of organization, teams are cross-functional, with less dependency among them.
  • One "action" per single-function microservice: Each microservice does the one, and only one, function it is dedicated to.
  • Each microservice has its own build and build life cycle: Each microservice can be deployed independently, and features can be added to the whole system without affecting other parts.
  • Improves fault isolation: A memory leak in one service affects only that service. Other services will continue to handle requests normally.
  • Eliminates long-term commitment to the technology stack: Developers are free to pick the languages and frameworks best suited for each service.

Communications in Microservices Architecture

When we move from a monolithic architecture to a microservice-based system, or build a microservices architecture-based system from scratch, one of the important design considerations is how client communications are routed to the different microservices. Because these services are small, independent processes, careful consideration of how clients, such as a desktop browser or a mobile application, communicate with them is very important.

Before we see different approaches, let us consider a case study and see how it can be implemented with different approaches.

Reporting Portals Case Study: A company that has raw data wants to process it and provide it as useful information to its customers; this data can help them make the right decisions at the right time. So, the company decided to build different reports and offer different features such as email subscriptions, SMS alerts for any unexpected data trends, and so forth.

A very simple approach to connecting the different features as microservices is to have clients, such as a desktop browser or mobile application, communicate directly with the microservices.

Figure 1: Clients talking to microservices directly

The architecture depicted in Figure 1 seems simple, but there is likely to be a significant mismatch between the APIs of individual services and data or functionality required by clients. A much better approach is to enable clients to talk to a new service, called the API gateway, that would act as an aggregator of different services and serve the client request.

Figure 2: API gateway between clients and microservices

In the architecture diagram depicted in Figure 2, the API gateways sit between the application clients and different microservices. They provide APIs that are tailored for different clients.

Open Source Ecosystem

Building a microservice with open source software eliminates the procurement cycle and avoids the licensing costs of proprietary software from big enterprise vendors. In addition, the available open source components can be extended or tailored for business needs.

Process Versus Evented Systems

Before we move on to building microservices, I would like to explain how process-based and event-based (evented) systems work and the difference between them. Understanding the purpose and the difference will enable us to choose the right implementation for building microservices. In process-based systems, a single process or thread is dedicated to serving a single request. If the request involves getting data from a persistent store or another service, the thread is blocked while waiting for the input operation to complete. In event-based systems, a single process serves a large number of requests, and all input operations are non-blocking. To summarize, here are the characteristics of process-based and event-based systems:

Process-based Systems

  • Single process/thread per request
  • Blocked while waiting for I/O to complete
  • Use process/thread pools

Event-based Systems

  • Single process for large number of requests
  • Non-blocking for I/O
  • Use one process per core on system for scale

Process-based systems are a great choice for compute-bound workloads. Event-based systems are great for I/O-bound workloads. If your application's business logic revolves around managing data, an event-based system is the better choice for higher throughput.

Building a Microservice with NodeJS and MongoDB

For the purpose of this article, I have chosen NodeJS as the platform to build a microservice for the following reasons:

  • The demo microservice is not compute intensive but I/O intensive, so NodeJS, which is an event-based system, should be a better choice.
  • The number of open source components contributed to NodeJS is very large. Figure 3 shows a snapshot of statistics as of September 9, 2014 (from http://www.modulecounts.com/), showing that NPM, the package manager in the NodeJS ecosystem, has more packages than the package managers of any other technology.

    Figure 3: Open source module data for different technology package managements

  • Evented all the way down: every Node module and the default client libraries of major systems, such as MySQL, MongoDB, CouchDB, and so on, are evented by default.
  • It is very lightweight, and you can choose just the components you want as opposed to adopting one large framework.

Now, let us review the scheduler service built using NodeJS for the purpose of demonstration. The scheduler service acts as a job scheduler and exposes its functionality as a RESTful service. As mentioned in the section "Communications in Microservices Architecture," the Reporting Portals case study can use this service to send out different reports to users on a recurring schedule, such as daily or monthly. Note that it can be classified as a microservice because it does the single job of scheduling and running jobs as requested. It is backed by MongoDB for persistent storage.
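To make the scheduler's single responsibility concrete, here is a minimal in-memory sketch of its core idea. The real service persists jobs in MongoDB through the agenda module; the function names and fields below are illustrative, not the repository's actual API:

```javascript
// In-memory job registry keyed by job name.
const jobs = {};

// Register a job: how often it runs and what action to perform.
function scheduleJob(name, intervalMs, action) {
  jobs[name] = { intervalMs: intervalMs, action: action, lastRun: null };
}

// When is a job due next? A job that has never run is due immediately;
// otherwise it is due one interval after its last run.
function nextRunTime(name, now) {
  const job = jobs[name];
  return job.lastRun === null ? now : job.lastRun + job.intervalMs;
}

scheduleJob("daily-report", 24 * 60 * 60 * 1000, function () {
  console.log("send report");
});
```

The real service wraps exactly this kind of logic behind a REST endpoint and delegates persistence and timing to agenda and MongoDB; the microservice itself stays focused on the one job of scheduling.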

I have hosted a scheduler service in Github at https://github.com/NeudesicIndia/scheduler-service. Feel free to clone/fork the repository to try it by yourself.

Solution Structure

Figure 4 shows the solution structure adopted for developing a Scheduler Microservice.

Figure 4: Solution structure of a scheduler microservice

The folder structure is set up in such a way that each folder represents a specific piece of sub-functionality, exposed through an index.js and backed by additional JavaScript files if required.

  • config: The place for all configuration-related settings. Exposed by index.js and backed by files such as devConfig.js that hold settings specific to the developer environment.
  • controllers: Holds the controllers that expose resources as a RESTful service. schedulerController is the main entry point; it receives all job requests and hands them over to the job scheduler module.
  • data: Acts as the data access layer.
  • global: For all global events and setting up the initial infrastructure.
  • jobs: For the job actions to run when the scheduler invokes them. These could send emails, extract data from external sources, and so on, depending on your business needs.
  • scheme: Defines what a JSON request should look like.
  • Dockerfile: Used by Docker during deployment to build containers.
  • gruntfile.js: Used by the Grunt tool to enable development productivity and quality-check tools.
  • package.json: Used by NPM to manage NodeJS packages.
  • server.js: The entry file for NodeJS.
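As an illustration of what the scheme folder's validation accomplishes, here is a stdlib-only sketch. The actual project uses the jsonschema package, and the field names below are hypothetical:

```javascript
// Hand-rolled validation of an incoming job request, standing in for a
// jsonschema definition. Field names (jobName, interval) are illustrative.
function validateJobRequest(body) {
  const errors = [];
  if (typeof body.jobName !== "string") errors.push("jobName must be a string");
  if (typeof body.interval !== "string") errors.push("interval must be a string");
  return { valid: errors.length === 0, errors: errors };
}

console.log(validateJobRequest({ jobName: "send report", interval: "1 day" }));
// → { valid: true, errors: [] }
```

Rejecting malformed requests at the service boundary keeps the controller and scheduler code free of defensive checks.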


The package.json file is used by NPM to manage packages. Code Segment 1 shows the package.json file defined and used for the scheduler microservice.

{
   "name": "SchedulerService",
   "version": "0.0.1",
   "description": "",
   "main": "server.js",
   "scripts": {
      "test": "echo \"Error: no test specified\" && exit 1"
   },
   "keywords": [],
   "author": "Suresh Balla",
   "license": "ISC",
   "dependencies": {
      "agenda": "^0.6.12",
      "body-parser": "^1.2.0",
      "express": "^4.3.1",
      "jsonschema": "^0.4.0",
      "mongoskin": "^1.4.4",
      "underscore": "^1.6.0",
      "winston": "^0.7.3",
      "winston-mongodb": "^0.4.5"
   },
   "devDependencies": {
      "grunt": "^0.4.5",
      "grunt-nodemon": "^0.2.1"
   }
}
Code Segment 1: The package.json file of the scheduler microservice

The following modules are used to fulfill the functionality of the scheduler service:

  • agenda: For a cron-based job scheduler
  • body-parser: For parsing HTTP POST JSON data
  • express: For building a RESTful service
  • jsonschema: For defining and validating a JSON request
  • mongoskin: For interacting with MongoDB
  • underscore: For JavaScript utility functions
  • winston: For the logging infrastructure
  • winston-mongodb: For the MongoDB transport of the Winston logging infrastructure
  • grunt, grunt-nodemon (dev dependencies): The Grunt tool with the nodemon plugin, which auto-restarts the application whenever a JavaScript file changes

Grunt Tools

Grunt plugins are very useful for development activities; there is a wide range of tools covering most developer needs. The one I personally prefer in NodeJS development projects is nodemon. When configured, this tool restarts the NodeJS application whenever a JavaScript file changes. Apart from this, I also like jshint, a JavaScript code quality tool, for validating JavaScript files. Take a look at all the grunt plugins available at http://gruntjs.com/plugins.

module.exports = function (grunt) {
   grunt.initConfig({
      nodemon: {
         dev: {
            script: 'server.js',
            options: {
               watchedExtensions: ['js']
            }
         }
      }
   });

   grunt.loadNpmTasks('grunt-nodemon');

   grunt.registerTask('default', ['nodemon']);
};

Code Segment 2: The grunt configuration file

Code Segment 2 shows the configuration file used by the grunt tool for the purpose of a scheduler microservice. I have configured it to use a tool called nodemon and this will watch for all JS file changes and restart the NodeJS application via the startup script server.js.

Apart from this, I would also recommend the JSHint tool that is available at https://www.npmjs.org/package/grunt-contrib-jshint. This tool helps validate JavaScript files and catch errors, such as the use of variables that are not defined. If we do not catch these kinds of errors at a very early stage, the cost of fixing them later is very high. It also has many options that can be configured to check for different categories of errors and coding standards.

Deploying Microservice Using Docker

One of the major advantages of microservices outlined in the preceding sections is that each service has its own independent build cycle and can be scaled at will, so only the parts that require greater resources are scaled rather than the whole monolithic system. Docker is a lightweight container technology that is catching on very fast in the world of Linux-based systems. It is an open platform for developers and system administrators to build, ship, and run distributed applications. In addition, Docker provides Docker Hub, a cloud service for sharing applications and automating workflows.

Docker is based on Linux cgroups to provide the required resource isolation (CPU, memory, block I/O, network, and the like) and is built using Google's Go programming language. If you are wondering how it differs from traditional virtual machines, Figure 5 will clarify the difference.

Figure 5: Virtual machine versus a docker container

Figure 5 shows the difference between virtual machines running on Type 1- and Type 2-based hypervisors and the Docker container. A virtual machine running on either kind of hypervisor includes not only the application and its necessary binaries and libraries, but also an entire guest operating system. The Docker container, on the other hand, comprises just the application and its dependencies and runs as an isolated process in userspace on the host operating system.

Now that we understand what Docker is and how it can be useful, let's build our NodeJS-based microservice as a Docker container and deploy it to different environments, such as staging and production.


As part of our NodeJS scheduler-service project structure, you will find a file called Dockerfile. The definition of this file is depicted in Code Segment 3.

 1.  # DOCKER-VERSION 0.10.0
 3.  # Pull base image.
 4.  FROM ubuntu:14.04
 6.  # Install Node.js
 7.  RUN apt-get update
 8.  RUN apt-get install -y software-properties-common
 9.  RUN add-apt-repository -y ppa:chris-lea/node.js
10.  RUN apt-get update
11.  RUN apt-get install -y nodejs
13.  ADD . /src
15.  RUN cd /src; npm install
17.  ENV PORT 3001
18.  ENV NODE_ENV development
20.  EXPOSE 3001
22.  CMD ["node", "/src/server.js"]

Code Segment 3: Dockerfile for scripting a container configuration

This Dockerfile defines all the setup instructions for building a Docker container image. The FROM instruction says to use Ubuntu 14.04 as the base image. The next set of instructions, Lines 7 to 11, install NodeJS. Lines 13 and 15 copy the NodeJS files (in the directory where the Dockerfile is present) into a new folder called src within the image to be built and run "npm install" inside the src folder, which installs all packages defined in the package.json file. Lines 17 and 18 set environment variables that can be used by the NodeJS application. Line 20 exposes port 3001 of the Docker container. And the last instruction, on Line 22, specifies the command to execute when the container starts; in our case, a node command that starts the node application.

Steps for Building and Deploying the Scheduler Service

    1. Build the Docker container image. Run the following command from the directory where the Dockerfile is located:
       sudo docker build -t <username>/<imagename> .
    2. Run the Docker container:
       sudo docker run -p 3001:3001 -d <username>/<imagename>
       This command maps the container's internal port 3001 to host port 3001, so the service will be listening on host port 3001.

Docker for Apple Mac and Windows

Docker is written in Go and makes use of Linux kernel features, so it cannot run natively on Apple Mac or Windows systems. But because it works well in virtualized environments, you can run Docker inside an EC2 instance, a Rackspace VM, or VirtualBox. The preferred way to use it on Mac or Windows is with Vagrant or Boot2Docker (a lightweight Linux distribution, around 25 MB in size, made specifically to run Docker containers).

Docker Hub

Docker Hub is a cloud service for sharing applications and automating workflows. If you want to avoid building a container image on every machine that runs the service, you can build it once in the hub and pull the built image wherever the service runs. Docker Hub has very good integration with Github and Bitbucket. If your NodeJS application's git repository is hosted with either of these services, you can link it to Docker Hub and issue a build command to build the image directly from the repository. By leveraging the automated workflows Docker Hub provides, we can achieve continuous integration with minimal effort.

Figure 6: The Docker hub build screen

The screenshot in Figure 6 shows how I linked and set up an automated build for the scheduler service. I just need to download the image by using the command "docker pull sureshballa/scheduler-service". For more information on linking Github or Bitbucket repositories to set up an automated build, please refer to https://docs.docker.com/docker-hub/builds/.

Scaling a Microservice Running in Docker Containers

Although Docker helps in deploying services reliably, it is not a full-blown software deployment system by itself, so scaling microservices running in Docker containers is not possible with the Docker tool alone. But this is not the end of the world; there are many ways to achieve scaling.

Open Source PaaS Tools

Deis and Flynn are two open source platform-as-a-service projects that can be used to scale Docker containers.

Amazon Web Service

AWS Elastic Beanstalk now supports Docker. With it, we can test applications in our local Docker environment, deploy them to AWS, and manage the running instances.


CoreOS

CoreOS is a server OS built from the ground up for the modern datacenter. CoreOS provides tools and guidance to ensure your platform is secure, reliable, and stays up to date. All applications are installed and run using Docker, managed by systemd.


Summary

In this article, we have seen the benefits of building microservices and how they help product teams enable the speed and availability of new features. We have also seen how to use open source technologies such as NodeJS and Docker to build and deploy microservices.


References

Microservice article by Martin Fowler: http://martinfowler.com/articles/microservices.html

Data on the number of packages released for each technology by its package manager: http://www.modulecounts.com/

Grunt tools for development activities in NodeJS: http://gruntjs.com/plugins

Github and Bitbucket Integration to Docker Hub for automated builds: https://docs.docker.com/docker-hub/builds/

