Deploying Docker Swarm on AWS EC2 Instances

Gaurang Pawar
7 min read · May 12, 2021


What is Docker and why should you use it?

Docker is a popular open-source project written in Go, originally developed by dotCloud (a PaaS company). It is essentially a container engine that uses Linux kernel features like namespaces and control groups to create containers on top of an operating system. Containers can run anything from your own applications (Node, Flask, Django, etc.) to third-party software like databases, message queues, and load balancers. All containers live inside the Docker host, which has its own separate network. Each container in this network has its own domain name and IP address by which other containers can talk to it.

Each container can be separately scaled anytime the user traffic increases. Docker allows developers to easily deploy their whole application stack on any platform.

What is container orchestration?

Container orchestration automates the scheduling, deployment, networking, scaling, health monitoring, and management of containers. Containers are complete applications; each one packaging the necessary application code, libraries, dependencies, and system tools to run on a variety of platforms and infrastructure.

You will usually have many containers working together in your application, including all your microservices, databases, caches, and message queues. To manage them we use container orchestrators like Kubernetes or Docker Swarm, which can easily manage thousands of containers across multiple virtual machines.

In this article, we will deploy our docker swarm on multiple AWS EC2 instances. We will create a simple node application with an NGINX load balancer and deploy it on AWS.

Prerequisites:

  1. Docker installed on your local machine
  2. Basic knowledge of Docker
  3. An AWS account
  4. Basic knowledge of AWS Virtual Private Cloud

Let's start by creating our node application: a simple, boring Express app that just displays “hello world”.

const express = require('express')
const app = express()

app.get("/", (req, res) => {
  res.send("hello world")
})

app.listen(3000, () => {
  console.log("Listening on port 3000..")
})

Cool, now let's create a docker image of our app.

A Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments, which you can keep for your own private use or share publicly with other Docker users.

We will first create a Dockerfile which will have all the instructions to create our image.

FROM node:14
WORKDIR /app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["node", "index.js"]

We also need a .dockerignore file where we list the things we don't want to put in our image: your git files, the Dockerfile, the docker-compose file, and basically anything that is not required to run your code. We usually do this to decrease the size of our image.

node_modules
Dockerfile

You can see I also included node_modules in .dockerignore. That's because in our Dockerfile we're telling Docker to run the “npm install” command, which will fetch all the modules inside the image.

Now let's create the image with the following command.

docker build -t myapp .

This will create our image with the name myapp. You can check if the image is created or not by using the following command.

docker image ls

Now we’ll push this image to Docker Hub. For that, we’ll create a new repository on Docker Hub.
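If you haven't authenticated with Docker Hub from your terminal before, you'll need to log in first (this assumes you already have a Docker Hub account):

```shell
# Authenticate with Docker Hub; you'll be prompted for your username and password
docker login
```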

Docker Hub only allows you to push an image if the image name matches the repository name, which in this case is “gaurang98671/myapp”. So we’ll rename our image with the following command.

docker tag myapp gaurang98671/myapp

Use the following command to push our image to Docker Hub.

docker push gaurang98671/myapp

To check whether the image was successfully pushed to Docker Hub, go to your repository and look at the Tags and Scans section; you should see that the image was pushed a few seconds ago.

Now we need a docker-compose.yaml file and an NGINX configuration file. The docker-compose.yaml file is where we declare all our required services and their configurations.

docker-compose.yaml

version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - node-app-container
  node-app-container:
    image: gaurang98671/myapp
    deploy:
      replicas: 5

Note that here we are specifying that we want 5 containers of our node application. We can scale to more if user traffic increases.
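The replica count can also be changed on the fly, without editing the compose file and redeploying. A sketch, assuming the stack is deployed under the name myapp as we do later:

```shell
# Scale the node app service from 5 to 10 replicas at runtime
sudo docker service scale myapp_node-app-container=10
```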

default.conf

server {
  listen 80;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Nginx-Proxy true;
    proxy_pass http://node-app-container:3000;
    proxy_redirect off;
  }
}

In the NGINX configuration we are forwarding all traffic to node-app-container:3000. Docker containers have their own domain names, so we don't need to know their IP addresses.

We’ll only need our docker-compose.yaml and NGINX configuration to deploy the application on the EC2 instances. You can either create these two files manually on the instance, or push them to GitHub and clone them from your production environment. Pushing to GitHub is usually the better option; just make sure you don't put any database passwords or client secrets in it.

Now we have everything we need to deploy our application.

Log in to your AWS account and create a VPC, or use your default VPC. I am using my default VPC, which has 3 subnets.

I will create 3 EC2 instances, 1 in each subnet. You can create all of your instances in one subnet if you wish to.
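One thing to keep in mind: the instances' security group must allow swarm traffic between nodes. Docker Swarm uses port 2377/tcp for cluster management, 7946/tcp and 7946/udp for node communication, and 4789/udp for overlay network traffic, and we'll also want 8080/tcp open for NGINX. You can set this up in the AWS console, or sketch it with the AWS CLI like this (the security group ID here is a hypothetical placeholder; use your own):

```shell
SG=sg-0123456789abcdef0  # hypothetical security group ID; replace with your own

# Swarm ports, allowed only between members of this security group
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 2377 --source-group $SG
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 7946 --source-group $SG
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 7946 --source-group $SG
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 4789 --source-group $SG

# NGINX port, open to the world so we can reach the app from a browser
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 8080 --cidr 0.0.0.0/0
```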

We will make one of the three instances the manager node and the other two worker nodes. The manager node maintains the cluster state, schedules services, and serves the swarm mode HTTP API endpoint.

Worker nodes are also instances of Docker Engine whose sole purpose is to execute containers. Worker nodes don’t participate in the Raft distributed state, make scheduling decisions, or serve the swarm mode HTTP API.

Keep in mind that the manager node is also a worker node, just with extra permissions, so containers will be deployed on the manager node as well.
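If you'd rather keep the manager free of application containers, you can optionally drain it later, once the swarm is up. A sketch, run from the manager (the hostname placeholder is whatever docker node ls shows for your manager):

```shell
# Stop scheduling new containers on the manager; running ones are moved to workers
sudo docker node update --availability drain <manager-node-hostname>
```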

Now SSH into all three instances.

Run these two commands on all three instances:

sudo apt-get update
sudo apt install docker.io
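Optionally, you can also make sure the Docker daemon starts on boot, so the swarm comes back up after an instance restart (assuming a systemd-based Ubuntu AMI):

```shell
# Start Docker now and enable it at every boot
sudo systemctl enable --now docker
```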

Find the private IP address of the first node, which will be your manager node:

ifconfig 

Now make your first node the manager node with the following command:

sudo docker swarm init --advertise-addr <your-private-ip>:2377

This will output a join command, which you just need to copy and paste on your worker nodes.
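The join command looks roughly like this (the token and IP below are placeholders; use the exact command from your own swarm init output):

```shell
sudo docker swarm join --token SWMTKN-1-<long-token-string> <manager-private-ip>:2377
```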

Once you run it there, those instances join the swarm as worker nodes.

We’ll now need our docker-compose.yaml and NGINX configuration, which you can clone from your own GitHub repository, or from mine: https://github.com/gaurang98671/docker-swarm-medium

The following command lists all the nodes in the swarm:

sudo docker node ls

Now, to create your containers, use the following command on your manager node. Make sure you are inside the directory you just cloned.

sudo docker stack deploy -c docker-compose.yaml myapp

Wait for Docker to create the containers.

Now use the following command to see all the services in your swarm:

sudo docker service ls

We can see that all our services are created: 1 NGINX container, which is our load balancer, and 5 node-app-containers.

Docker Swarm distributed all our containers across the three EC2 instances. To see the containers running on an instance, use the following command on that instance:

sudo docker ps

Now let's see if our boring application actually works. Open your browser and enter the public IP address of the first instance and the NGINX port. It should look something like this:

http://3.210.205.171:8080
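You can also hit the endpoint from a terminal (using your own instance's public IP in place of mine):

```shell
# Fetch the app through the NGINX load balancer
curl http://3.210.205.171:8080
```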

And it works😎😎

NGINX will automatically load balance all incoming requests across all 5 node-app-containers. You can scale those containers up to any number any time user traffic increases, and push changes to your Docker Hub repo when you want to change something. Docker makes it easy to deploy and ship your applications. That's it for now; hope this article was helpful.
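For example, after pushing a new version of the image, the running service can be updated in place rather than redeploying the whole stack. A sketch, assuming the stack name myapp we used above:

```shell
# Pull the updated image and roll it out across the replicas
sudo docker service update --image gaurang98671/myapp:latest myapp_node-app-container
```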

Goodbye👋👋👋
