Automating Node.js Deployment with Docker and Docker Compose on AWS EC2


Junius L

September 24, 2023

6 min read

Introduction

In today's fast-paced world of software development, automating the deployment process is crucial for delivering reliable and scalable applications. Docker and Docker Compose provide an excellent solution for packaging applications and their dependencies, making it easier to deploy them consistently across different environments. In this blog post, we will explore how to automate Node.js deployment using Docker and Docker Compose on an AWS EC2 instance.

Creating an EC2 Instance

Start by creating an EC2 instance on AWS. Follow these steps:

  1. Log in to AWS Console: Sign in to your AWS Management Console using your credentials.

  2. Launch Instance: Navigate to the EC2 dashboard and click on the "Launch Instance" button.

  3. Choose an Amazon Machine Image (AMI): Select an appropriate AMI for your EC2 instance. Ensure that the AMI is compatible with your application's requirements. For Node.js, a standard Amazon Linux 2 AMI should work well.

  4. Choose an Instance Type: Select the instance type that suits your application's needs. A t2.micro instance is a good starting point for testing and small applications.

  5. Select an Existing Key Pair or Create a New Key Pair: If you don't have an existing key pair, create a new one. This key pair will allow you to SSH into your EC2 instance securely.

  6. Configure Security Group: Create or select a security group that allows incoming traffic on port 22 (SSH) for remote access, port 80 (HTTP), and any other ports your application requires. For SSH, restrict access to your own IP address.

  7. Provide a User Data Script: When launching your EC2 instance, you can provide user data to run initialization scripts on first boot. In our case, we want to install Docker, Git, and Docker Compose. Use the following user data script:

#!/bin/bash
# The script doesn't start in the home directory, so go there
cd
sudo yum update -y
sudo yum install docker git -y
# Install the latest Docker Compose binary for this architecture
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo systemctl enable --now docker.service
# Allow non-root users to talk to the Docker daemon
sudo chmod 666 /var/run/docker.sock
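Once the instance has booted, it's worth connecting (via SSH or Session Manager) and confirming the user data script ran successfully — a quick sanity check:

```
docker --version            # Docker version info
docker-compose --version    # Compose version info
git --version
systemctl is-active docker  # should print "active"
```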

Preparing Your Node.js Application

Make sure your Node.js application is ready for deployment. You should have a Dockerfile in your project directory that specifies how your Node.js app should be containerized.

Here's a simple example of a Dockerfile for a Node.js application:

FROM node:16-alpine3.18
 
WORKDIR /app
 
COPY package*.json ./
 
RUN npm ci
 
COPY . .
 
EXPOSE 8083
# for fastify, remove if you are not using fastify
ENV ADDRESS=0.0.0.0 PORT=8083
 
CMD [ "npm", "start" ]
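Because the final COPY . . copies the entire build context into the image, it's also worth adding a .dockerignore file next to the Dockerfile so local artifacts stay out of the image — a minimal example:

```
node_modules
npm-debug.log
.git
.env
```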

Using Docker Compose

Create a docker-compose.yml file in your project directory to define how your Node.js application and any necessary services should run together. Here's a simple example:

version: "3.8"
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    depends_on:
      - api
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
  api:
    restart: always
    depends_on:
      - db
    build:
      context: https://github.com/julekgwa/cibe.git#main
      dockerfile: Dockerfile
    ports:
      - "8083-8085:8083"
    environment:
      MONGODB_URI: mongodb://db/movies
  db:
    restart: always
    image: mongo
    volumes:
      - movies:/data/db
 
volumes:
  movies:
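With the compose file in place, you can validate it and bring the stack up before wiring up CI. The --scale flag is what makes the 8083-8085 host port range useful — each api replica gets one port from the range:

```
# Check the file for syntax errors
docker-compose config -q

# Start nginx, mongo, and three api replicas
docker-compose up -d --build --scale api=3
```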

Setting Up Load Balancing with Nginx

Setting up load balancing with Nginx involves configuring Nginx as a reverse proxy and defining the backend servers you want to distribute traffic to. Here's a simplified example of an Nginx configuration:

events {}
http {
  include /etc/nginx/conf.d/*.conf;
 
  # Docker's embedded DNS resolves "api" to every scaled replica,
  # and nginx resolves the name when it starts — so make sure the
  # api replicas are up before (re)starting nginx.
  upstream appservers {
    server api:8083;
  }
 
  server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;
    root         /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
      client_max_body_size 100M;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $http_host;
      proxy_pass http://appservers;
      proxy_redirect off;
    }
  }
}
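Once the stack is running, you can validate the mounted configuration and watch requests being distributed — a quick smoke test, assuming your API serves a route at /:

```
# Check the nginx config inside the running container
docker exec nginx nginx -t

# Fire a few requests; nginx round-robins them across the replicas
for i in 1 2 3 4 5 6; do curl -s -o /dev/null -w "request $i: %{http_code}\n" http://localhost/; done
```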

Create a New User for Deployment

It's essential to create a separate user for deployment to enhance security and restrict access. Let's create a new user named ciuser without password-based login:

sudo useradd -m -d /home/ciuser -s /bin/bash ciuser

Next, generate an SSH key pair for the ciuser user to allow secure logins:

ssh-keygen -m PEM -t rsa -f ~/.ssh/ciuser

Now, create a .ssh directory for the ciuser user:

sudo mkdir /home/ciuser/.ssh

Copy the public key printed by the following command:

cat ~/.ssh/ciuser.pub

and paste it into the authorized_keys file:

sudo vi /home/ciuser/.ssh/authorized_keys

Finally, set the correct ownership:

sudo chown -R ciuser:ciuser /home/ciuser

Configure Docker Commands for the Non-Root User

By default, Docker commands require root privileges. To let the ciuser user run Docker without a password prompt, grant passwordless sudo for the docker binary in the sudoers file and add a matching shell alias to ciuser's profile:

echo "ciuser  ALL=(ALL)  NOPASSWD: /usr/bin/docker" | sudo tee -a /etc/sudoers
echo "alias docker=\"sudo /usr/bin/docker\"" | sudo tee -a /home/ciuser/.bash_profile
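You can confirm the setup by switching to ciuser and running a Docker command — the alias transparently routes it through the passwordless sudo rule:

```
sudo su - ciuser
docker ps   # expands to "sudo /usr/bin/docker ps" via the alias
```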

Using GitHub Actions to Deploy with Docker Compose on EC2

Configuring GitHub Secrets

To securely store your AWS access credentials for GitHub Actions, you can use GitHub Secrets. Follow these steps to add your credentials as secrets to your GitHub repository:

  1. Go to your GitHub repository.

  2. Click on "Settings" in the repository's menu.

  3. Select "Secrets" from the left sidebar.

  4. Click "New repository secret."

  5. Add the following secrets:

    • USER_NAME: The deployment user (e.g. ciuser, which we created earlier).
    • SSH_HOST: The public IP address of your EC2 instance.
    • SSH_PRIVATE_KEY: The private SSH key for accessing the EC2 instance. Print it with cat ~/.ssh/ciuser and copy the entire output, including the BEGIN and END lines.

Creating a GitHub Actions Workflow

Now, let's create a GitHub Actions workflow that will automate the deployment process. Create a file named .github/workflows/deploy.yml in your repository with the following content:

name: Deploy
 
on:
  push:
    branches: [ main ]
 
jobs:
  Deploy:
    name: Deploy to EC2
    runs-on: ubuntu-latest
 
    steps:
      - uses: actions/checkout@v3
      - name: Build & Deploy
        env:
            PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
            HOSTNAME: ${{secrets.SSH_HOST}}
            USER_NAME: ${{secrets.USER_NAME}}
 
        run: |
          echo "$PRIVATE_KEY" > private_key && chmod 600 private_key
          ssh -o StrictHostKeyChecking=no -i private_key ${USER_NAME}@${HOSTNAME} '
 
              # Now that we have access to the EC2 instance, start the deployment
              docker-compose up --scale api=3 -d --build --force-recreate
              '
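Note that the workflow assumes docker-compose.yml and nginx.conf already exist in ciuser's home directory on the instance — the compose file builds the api image straight from the GitHub repository, so the application source itself never needs to be copied over. You can put the two files there once with scp (replace the placeholder with your instance's public IP):

```
scp -i ~/.ssh/ciuser docker-compose.yml nginx.conf ciuser@<EC2_PUBLIC_IP>:~/
```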

Source code

https://github.com/julekgwa/cibe

Conclusion

By configuring GitHub Actions to SSH into your EC2 instance and run docker-compose, you've created an automated deployment pipeline for your application. Now, whenever you push changes to your main branch, GitHub Actions will handle the deployment process, ensuring that your application is always up to date and running on your EC2 instance.