Junius L
September 24, 2023
6 min read
In today's fast-paced world of software development, automating the deployment process is crucial for delivering reliable and scalable applications. Docker and Docker Compose provide an excellent solution for packaging applications and their dependencies, making it easier to deploy them consistently across different environments. In this blog post, we will explore how to automate Node.js deployment using Docker and Docker Compose on an AWS EC2 instance.
Start by creating an EC2 instance on AWS. Follow these steps:
Log in to AWS Console: Sign in to your AWS Management Console using your credentials.
Launch Instance: Navigate to the EC2 dashboard and click on the "Launch Instance" button.
Choose an Amazon Machine Image (AMI): Select an appropriate AMI for your EC2 instance. Ensure that the AMI is compatible with your application's requirements. For Node.js, a standard Amazon Linux 2 AMI should work well.
Choose an Instance Type: Select the instance type that suits your application's needs. A t2.micro instance is a good starting point for testing and small applications.
Select an Existing Key Pair or Create a New Key Pair: If you don't have an existing key pair, create a new one. This key pair will allow you to SSH into your EC2 instance securely.
Configure Security Group: Create or select a security group that allows inbound traffic on port 22 (SSH) for remote access, port 80 (HTTP), and any other ports your application requires. For SSH, restrict the source to your own IP address. If you'd rather script the launch, the AWS CLI sketch after this list covers the same steps.
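If you prefer the command line, here's a rough AWS CLI equivalent of the steps above. The AMI and security group IDs are placeholders, deploy-key is a hypothetical key pair name, and user-data.sh refers to the script in the next section:

# Create a key pair and keep the private key for SSH access
aws ec2 create-key-pair --key-name deploy-key \
  --query 'KeyMaterial' --output text > deploy-key.pem
chmod 400 deploy-key.pem

# Launch a t2.micro Amazon Linux 2 instance; the security group must
# allow inbound traffic on ports 22 and 80
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name deploy-key \
  --security-group-ids sg-xxxxxxxx \
  --user-data file://user-data.sh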
User Data Script
When launching your EC2 instance, you can provide user data to run initialization scripts. In this case, we want to install Docker, Docker Compose, and Git. Use the following user data script:
#!/bin/bash
# SSM user didn't start in the home directory, so go there
cd
sudo yum update -y
sudo yum install docker git -y
sleep 1
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sleep 1
sudo chmod +x /usr/local/bin/docker-compose
sleep 5
sudo systemctl enable --now docker.service
# NOTE: a world-writable Docker socket is convenient but permissive;
# adding users to the docker group is the stricter alternative
sudo chmod 666 /var/run/docker.sock
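Once the instance finishes booting, you can SSH in and sanity-check that the user data script completed (version numbers will vary):

docker --version
docker-compose version
systemctl is-active docker

Make sure your Node.js application is ready for deployment. You should have a Dockerfile in your project directory that specifies how your Node.js app should be containerized.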
Here's a simple example of a Dockerfile for a Node.js application:
FROM node:16-alpine3.18
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 8083
# for fastify, remove if you are not using fastify
ENV ADDRESS=0.0.0.0 PORT=8083
CMD [ "npm", "start" ]
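Before involving Compose or CI, it's worth a quick local smoke test of the image (the tag my-node-app is just an example name):

docker build -t my-node-app .
docker run --rm -p 8083:8083 my-node-app

Create a docker-compose.yml file in your project directory to define how your Node.js application and any necessary services should run together. Here's a simple example: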
version: "3.8"
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    # start after the api replicas so the upstream hostname resolves
    depends_on:
      - api
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
  api:
    restart: always
    depends_on:
      - db
    build:
      context: https://github.com/julekgwa/cibe.git#main
      dockerfile: Dockerfile
    ports:
      # a host-port range so each scaled replica can bind its own port
      - "8083-8085:8083"
    environment:
      MONGODB_URI: mongodb://db/movies
  db:
    restart: always
    image: mongo
    volumes:
      - movies:/data/db
volumes:
  movies:
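You can bring the whole stack up locally before automating anything; this assumes docker-compose.yml and nginx.conf sit in the current directory:

docker-compose up -d --build --scale api=3
docker-compose ps

Setting up load balancing with Nginx involves configuring Nginx as a reverse proxy and defining the backend servers you want to distribute traffic to. Here's a simplified example of an Nginx configuration: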
events {}

http {
    include /etc/nginx/conf.d/*.conf;

    # Docker's embedded DNS resolves the service name "api" to every scaled
    # replica, and each replica listens on port 8083 inside the network, so a
    # single upstream entry lets nginx balance across all of them.
    upstream appservers {
        server api:8083;
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            client_max_body_size 100M;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_pass http://appservers;
            proxy_redirect off;
        }
    }
}
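With the stack running, you can validate the configuration and watch requests being spread across the replicas (assuming your app logs incoming requests):

docker exec nginx nginx -t
curl -s http://localhost/
docker-compose logs --tail=20 api

It's essential to create a separate user for deployment to enhance security and restrict access. Let's create a new user named ciuser without password-based login: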
sudo useradd -m -d /home/ciuser -s /bin/bash ciuser

Next, generate an SSH key pair for the ciuser user to allow secure logins:
ssh-keygen -m PEM -t rsa -f ~/.ssh/ciuser

Now, create a .ssh directory for the ciuser user:
sudo mkdir /home/ciuser/.ssh

Then add the public key to authorized_keys inside /home/ciuser/.ssh/. Copy the output of:

cat ~/.ssh/ciuser.pub

and paste it into:

sudo vi /home/ciuser/.ssh/authorized_keys

Finally, set the correct ownership and permissions:

sudo chown -R ciuser:ciuser /home/ciuser
sudo chmod 700 /home/ciuser/.ssh
sudo chmod 600 /home/ciuser/.ssh/authorized_keys

By default, Docker commands require root privileges. To allow the ciuser user to use Docker without sudo, add the following lines to the sudoers file:
echo "ciuser ALL=(ALL) NOPASSWD: /usr/bin/docker" | sudo tee -a /etc/sudoers
echo "alias docker=\"sudo /usr/bin/docker\"" | sudo tee -a /home/ciuser/.bash_profileTo securely store your AWS access credentials for GitHub Actions, you can use GitHub Secrets. Follow these steps to add your credentials as secrets to your GitHub repository:
Go to your GitHub repository.
Click on "Settings" in the repository's menu.
Select "Secrets" from the left sidebar.
Click "New repository secret."
Add the following secrets:
USER_NAME: The deployment user, e.g. ciuser, which we created earlier.
SSH_HOST: The public IP address of your EC2 instance.
SSH_PRIVATE_KEY: Your private SSH key for accessing the EC2 instance. Use the following command to print it: sudo cat ~/.ssh/ciuser
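If you prefer the GitHub CLI, the same secrets can be set from a terminal where both gh and the key file are available (run gh auth login first, and pass --repo if you're not inside a clone of the repository; <EC2_PUBLIC_IP> is a placeholder):

gh secret set USER_NAME --body "ciuser"
gh secret set SSH_HOST --body "<EC2_PUBLIC_IP>"
gh secret set SSH_PRIVATE_KEY < ~/.ssh/ciuser

Now, let's create a GitHub Actions workflow that will automate the deployment process. Create a file named .github/workflows/deploy.yml in your repository with the following content: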
name: Deploy

on:
  push:
    branches: [ main ]

jobs:
  Deploy:
    name: Deploy to EC2
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Build & Deploy
        env:
          PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
          HOSTNAME: ${{ secrets.SSH_HOST }}
          USER_NAME: ${{ secrets.USER_NAME }}
        run: |
          echo "$PRIVATE_KEY" > private_key && chmod 600 private_key
          ssh -o StrictHostKeyChecking=no -i private_key ${USER_NAME}@${HOSTNAME} '
            # Now that we have access to the EC2 instance, start the deploy.
            # This assumes docker-compose.yml and nginx.conf are present in
            # the deployment user'"'"'s home directory.
            docker-compose up --scale api=3 -d --build --force-recreate
          '

The complete example project is available at https://github.com/julekgwa/cibe.
By configuring GitHub Actions to SSH into your EC2 instance and run docker-compose, you've created an automated deployment pipeline for your application. Now, whenever you push changes to your main branch, GitHub Actions will handle the deployment, ensuring that your application is always up to date and running on your EC2 instance.
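Once the workflow has run, a request against the instance's public IP should confirm the new containers are serving traffic (replace <EC2_PUBLIC_IP> with your instance's address):

curl -i http://<EC2_PUBLIC_IP>/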