Dockerize a Node.js service with MongoDB using Docker Compose

In a world where speed of delivery, automation, and reliability are of growing importance, and where applications are increasingly architected as independent microservices, containerization is a must.

That is why we need to learn Docker. Docker lets you spin up containers (which behave like VMs but are far more lightweight) and run your app inside them. This makes the container portable to any server we want, and moving from local to production becomes easier and less buggy because the environment inside the container does not change. In a RESTful application, front-end developers can avoid all the trouble of setting up the services on their local machines: all they have to do is run one simple command, docker-compose up, and the service starts running.

Goals

In this article, we will cover:

  • Creating a Dockerfile to dockerize a Node.js service.
  • Tying this service to another application (MongoDB).
  • Starting the service along with its dependencies using docker-compose.

Our App

Let’s quickly add a Node.js app which talks to MongoDB. To keep things simple and quick, I’ll be using an npm module I made a while back, express-mongo-crud. It quickly adds CRUD APIs based on a Mongoose model.

npm i express mongoose body-parser express-mongo-crud --save

app.js

var express = require('express');
var app = express();
var mongoose = require('mongoose');
var bodyParser = require('body-parser');
var PORT = 3000;

// REQUIRE MIDDLEWARE
var instantMongoCrud = require('express-mongo-crud'); // require the module

mongoose.connect('mongodb://localhost:27017/mongocrud'); // connect to a local MongoDB instance

var options = { //specify options
    host: `localhost:${PORT}`
}

//USE AS MIDDLEWARE
app.use(bodyParser.json()); // add body parser
app.use(instantMongoCrud(options)); // use as middleware

app.listen(PORT, () => {
    console.log(`Server started on port ${PORT}`);
});

models/user.model.js

module.exports = function (mongoose) {
    return [{
        // schema definition
        name: { type: String, required: true },
        email: { type: String }
    }, {
        // schema options
        timestamps: true,
        createdby: true,
        updatedby: true
    }];
};
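
For readers unfamiliar with the module's array format: the first element is the schema definition and the second is the options object (createdby and updatedby are options specific to express-mongo-crud). For reference only, a roughly equivalent plain Mongoose schema would look like this:

var userSchema = new mongoose.Schema({
    name: { type: String, required: true },
    email: { type: String }
}, {
    timestamps: true // adds createdAt/updatedAt automatically
});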

Assuming you have MongoDB installed and running, running node app.js should start the app.

Creating a Dockerfile

To containerize our service, we need at the very least a Dockerfile. A Dockerfile contains a set of instructions that Docker runs through to create an image of our application. This image will then be used to spin up our container.

Images are immutable files, whereas containers are running environments created from image files. We can consider containers as instances of images.
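
To see the distinction on your own machine, Docker ships a command for listing each:

docker images   # lists the images you have built or pulled
docker ps       # lists the containers currently running from those images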

touch Dockerfile

First, we need to specify a base image to build from. We shall use the Node.js LTS version boron from Docker Hub.

FROM node:boron

Next, we will specify the working directory within our container.

WORKDIR /usr/src/app

Next, copy package.json and package-lock.json into the working directory.

COPY package.json /usr/src/app

COPY package-lock.json /usr/src/app

Next, install all the dependencies. We can run commands during the build using RUN.

RUN npm install

If you have any more dependencies to install, you can do so after this. I use pm2 to manage my process within the container.

RUN npm install pm2 -g

Now we can copy our application code into the working directory.

COPY . /usr/src/app

Our app runs on port 3000, so we will use EXPOSE to document that the container listens on this port. Note that EXPOSE does not publish the port by itself; we will map it to the host later using docker-compose.

EXPOSE 3000

Finally, we start our app using the CMD instruction.

CMD ["pm2-docker", "start", "process.json"]

I’m using pm2, but it could be as simple as CMD ["node", "app.js"]. Don’t worry about the process.json file for now; we will come to it later.

Our final Dockerfile would look like this.

Dockerfile

FROM node:boron

WORKDIR /usr/src/app

COPY package.json /usr/src/app

COPY package-lock.json /usr/src/app

RUN npm install

RUN npm install pm2 -g

COPY . /usr/src/app

EXPOSE 3000

CMD ["pm2-docker", "start", "process.json"]

Now, remember when we added the statement to copy our app code into the working directory? There is a slight problem with that: it would copy unwanted things like node_modules into the container. To solve this problem, we need a .dockerignore file.

Create a .dockerignore file in the same directory as the Dockerfile and add the following.

node_modules
npm-debug.log
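
With the Dockerfile and .dockerignore in place, we can already build the image and run a container from it by hand. A minimal smoke test might look like this (the myapp tag is an arbitrary name chosen for this example):

docker build -t myapp .        # build the image from the Dockerfile
docker run -p 3000:3000 myapp  # run a container and publish port 3000

(The app will start, but it won't reach MongoDB yet; we will wire that up next.)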

As the smoke test shows, the image builds and runs on its own. But in real applications, services do not run in isolation; they depend on other things like databases, event queues, or in-memory stores.

In our case, we need a MongoDB instance to connect to. One option would have been to bundle MongoDB within the same container, but this is not good practice. One reason is that different environments hold different data; for example, we would have a lot of dummy data for testing locally, which we obviously do not need in production. Also, the service layer and the database are scaled differently (load-balanced and replicated, respectively), so having our application and database in the same container would cause problems.

On one hand, having everything in the same container is a problem; on the other hand, if your service has many dependencies, managing and linking them all by hand becomes cumbersome. Enter Docker Compose.

Docker Compose is a tool for defining and running multi-container Docker applications with ease. Next, we shall see how we can use Docker Compose to run the container we defined in our Dockerfile; furthermore, we will also spin up a MongoDB container and link it to our service container.

Now, let’s start with our docker-compose.yml. In the same directory where you have the Dockerfile, create a docker-compose.yml file.

touch docker-compose.yml

Next, specify the version of the Compose file format to use. We will go with version 2.

version: '2'

Now it’s time to add our services.

# Define the services/containers to be run
services:
  myapp: #name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3000:3000" #specify ports forwarding
    links:
      - database # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - database
Under services, we can include as many services as we like. build takes the directory where our Dockerfile sits. Using ports we can map the host machine's ports to the container's ports, which lets us reach the service from the host machine. volumes syncs/maps our local working directory into the container's directory; this is particularly useful in development environments, for things like setting up hot-reload.
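
One caveat with mounting .:/usr/src/app (a well-known gotcha, not specific to this app): the bind mount shadows the node_modules that npm install created inside the image, so the container ends up using whatever node_modules exists on the host. If that is not what you want, a common trick is to add an anonymous volume for node_modules so the image's copy wins:

    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules # keep the container's node_modules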

We have linked our myapp service to database and also made it dependent on it. Using links we enable communication between the two containers, and using depends_on we make sure the database container is started before myapp.
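
Note that depends_on only waits for the database container to start, not for MongoDB inside it to actually accept connections, so the app can still race ahead of the database. A common workaround (a sketch, not part of the original app) is to retry the connection in app.js; the database hostname used here instead of localhost is explained further below:

// keep retrying until MongoDB is ready to accept connections
function connectWithRetry() {
    mongoose.connect('mongodb://database:27017/mongocrud', function (err) {
        if (err) {
            console.error('MongoDB not ready, retrying in 5 seconds...');
            setTimeout(connectWithRetry, 5000);
        }
    });
}
connectWithRetry();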

That being said, we haven’t yet defined our database service in docker-compose.yml. Let’s go ahead and do just that. Add the below at the same level as myapp.

  database: # name of the service
    image: mongo # specify image to build container from

Notice that here we are building our container from an image, contrary to what we did for our Node.js service, where we built the container from a Dockerfile.

And that’s about it. Let’s have a look at the complete docker-compose.yml.

docker-compose.yml

version: '2'

# Define the services/containers to be run
services:
  myapp: #name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3000:3000" #specify ports forwarding
    links:
      - database # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - database
     
  database: # name of the service
    image: mongo # specify image to build container from
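
YAML is indentation-sensitive, so before going further it can be worth validating the file. docker-compose config parses it and prints the resolved configuration, or an error if something is off:

docker-compose config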

Now, remember we mentioned process.json? This file holds the configuration for pm2. We can also set environment variables through it, or we can let Docker do it.

process.json

{
    "apps" : [{
      "name"        : "myapp",
      "script"      : "app.js",
      "args"        : [],
      "log_date_format" : "YYYY-MM-DD HH:mm Z",
      "watch"       : false,
      "node_args"   : "",
      "merge_logs"  : false,
      "cwd"         : "./",
      "exec_mode"  : "cluster",
      "instances": "1",
      "env": {
          "NODE_ENV": "local"
      }
    }]
}
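
As an alternative to the env block above, environment variables can also be set from the Compose side. A sketch of what that would look like under the myapp service (NODE_ENV=production is just an example value):

    environment:
      - NODE_ENV=production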

A small problem?

Everything is set and ready to go, right? There is one small problem: now that our database is running in its own container, connecting to MongoDB with localhost (mongoose.connect('mongodb://localhost:27017/mongocrud');), or to any other database using localhost, won’t work.
We also cannot hard-code the container’s IP, because Docker assigns IPs dynamically when containers restart.
Docker resolves this using internal naming. In our docker-compose.yml we named our MongoDB service database, and we can use that name as the host while connecting to MongoDB. So in app.js we change mongoose.connect('mongodb://localhost:27017/mongocrud'); to mongoose.connect('mongodb://database:27017/mongocrud');.
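
If you still want node app.js to work outside Docker, one common pattern (a sketch; the MONGO_HOST variable name is our own choice) is to read the host from an environment variable:

// default to the Compose service name, but allow an override for local runs
var MONGO_HOST = process.env.MONGO_HOST || 'database';
mongoose.connect('mongodb://' + MONGO_HOST + ':27017/mongocrud');

Running MONGO_HOST=localhost node app.js would then connect to a local MongoDB as before.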

Running the Service

We can bring up our containers with the command below.

docker-compose up --build

This should start both of our containers. Our service will be accessible on localhost:3000, and since we are using express-mongo-crud, the Swagger API documentation is available at localhost:3000/apidoc.
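
A quick way to verify everything from another terminal (assuming curl is available):

docker-compose ps                  # both containers should be listed as Up
curl http://localhost:3000/apidoc  # should return the Swagger documentation page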

Conclusion

Now we know how to build a Docker image and spin up dependent containers using docker-compose. This article showed how to set up a Node.js service to work with MongoDB inside Docker containers, but the same approach can be used to containerize services written in other languages or depending on other databases.
