So far, in this series of articles, we’ve been using Docker to run various services our NestJS application depends on, such as PostgreSQL and Redis. We can take it a step further and put our NestJS application into a Docker container as well. This can be very useful when it comes to deploying our app.
You can find the code from this article in this repository.
Building the Docker image
We need to start by building a Docker image for our NestJS application. A Docker image acts as a template that describes everything our application needs to run. For example, it includes the source code and its JavaScript dependencies. Besides the above, it also contains information about the desired operating system and its components, such as the Node.js version.
There are a lot of Docker images ready to use and maintained by the community. We can find them on Docker Hub. A good example is node:alpine, based on the lightweight Alpine Linux distribution that comes with Node.js preinstalled.
To define a Docker image, we need to create the Dockerfile. It’s a text file containing all the information about the image. When building a Docker image for our application, we can base it on an existing one. To do that, we need to start our Dockerfile with the FROM keyword and specify the parent image from which we’re building our new image.
```dockerfile
FROM node:18-alpine
```

We can now specify additional instructions to make sure our NestJS application will be able to run correctly. By default, Docker runs all of the following commands in the root directory. To keep our Docker container tidy, we can use the WORKDIR instruction to define the working directory for all the following commands.
```dockerfile
FROM node:18-alpine

WORKDIR /user/src/app
```

So far, our Docker image does not contain the source code of our NestJS application. Let’s add it and install the dependencies.
```dockerfile
FROM node:18-alpine

WORKDIR /user/src/app

COPY . .

RUN npm ci --omit=dev
```

npm ci is an alternative to npm install meant for automated environments. By running it with the --omit=dev flag, we avoid installing the packages listed in devDependencies and achieve a smaller Docker image. Because of that, we need to make sure that we list all of the packages the image needs, such as @nestjs/cli, in the dependencies section.
By running COPY . ., we copy all of the files from our application to the /user/src/app directory in our Docker image. Then, we execute RUN npm ci to install the necessary JavaScript dependencies.
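Since --omit=dev skips devDependencies but our Dockerfile still runs npm run build, the build tooling has to live in the regular dependencies section. A minimal sketch of what that might look like — the package versions here are illustrative assumptions, not taken from the repository:

```json
{
  "dependencies": {
    "@nestjs/cli": "^9.0.0",
    "@nestjs/common": "^9.0.0",
    "@nestjs/core": "^9.0.0",
    "typescript": "^4.9.0"
  },
  "devDependencies": {
    "jest": "^29.0.0"
  }
}
```

With a layout like this, packages such as jest never end up in the production image, while everything nest build needs is still installed.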
Once the above process is complete, we can build and start our NestJS application. Our complete Dockerfile looks like this:
```dockerfile
FROM node:18-alpine

WORKDIR /user/src/app

COPY . .

RUN npm ci --omit=dev

RUN npm run build

USER node

CMD ["npm", "run", "start:prod"]
```

We run USER node to avoid running our application as root for security reasons. It might also be a good idea to change the owner of the files we’ve copied before.
It’s crucial to understand the difference between the RUN and CMD instructions. The RUN instruction defines an image build step that Docker executes while building the image, before the application starts. Therefore, we can have multiple RUN instructions in a single Dockerfile.
The CMD instruction does not run when building the image. Instead, Docker executes it once when running the Docker container created based on our image. Therefore, using more than one CMD instruction causes the last one to override the previous CMD commands.
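To illustrate the difference, here is a contrived Dockerfile fragment (not part of our actual setup):

```dockerfile
# Both RUN instructions execute at build time, each creating a new image layer.
RUN npm ci --omit=dev
RUN npm run build

# CMD only records the default command for the container; Docker executes it
# when the container starts. With two CMD lines, only the last one takes effect.
CMD ["echo", "this CMD is overridden"]
CMD ["npm", "run", "start:prod"]
```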
Ignoring some of the files in our application
By default, COPY . . copies all of the files from the directory of our application into the Docker image. However, we need to remember that there are some files we shouldn’t put into our Docker image. To define them, we need to create the .dockerignore file.
```
node_modules
dist
.git
.env
docker.env
```

Building the Docker image
Once we’ve created our Dockerfile, we need to build our Docker image.
```bash
docker build --tag "nestjs-api" .
```

```
Sending build context to Docker daemon  856.1kB
Step 1/7 : FROM node:18-alpine
 ---> 264f8646c2a6
Step 2/7 : WORKDIR /user/src/app
 ---> Running in a157ed686647
Removing intermediate container a157ed686647
 ---> 665415e6101e
Step 3/7 : COPY . .
 ---> 6ba70bc9f752
Step 4/7 : RUN npm ci --omit=dev
 ---> Running in 4206995663aa
added 565 packages, and audited 566 packages in 17s
59 packages are looking for funding
  run `npm fund` for details
found 0 vulnerabilities
Removing intermediate container 4206995663aa
 ---> c98bfbb842ec
Step 5/7 : RUN npm run build
 ---> Running in af3ae30c58da
> nest-typescript-starter@1.0.0 prebuild
> rimraf dist
> nest-typescript-starter@1.0.0 build
> nest build
Removing intermediate container af3ae30c58da
 ---> 26021dcbe202
Step 6/7 : USER node
 ---> Running in bbfe5d194ada
Removing intermediate container bbfe5d194ada
 ---> ad1331df0fa1
Step 7/7 : CMD ["npm", "run", "start:prod"]
 ---> Running in 1691c04e966b
Removing intermediate container 1691c04e966b
 ---> 1268ba0ec302
Successfully built 1268ba0ec302
Successfully tagged nestjs-api:latest
```
The crucial part of the above process is that Docker ran npm ci and npm run build but didn’t run npm run start:prod yet.
By adding --tag "nestjs-api" we’ve chosen a name for our Docker image. Thanks to that, we will be able to refer to it later.
Using Docker Compose to run multiple containers
Once we’ve built a Docker image, we can run a Docker container based on it. The container is a runtime instance of the image.
We could run our image with a single command:
```bash
docker run nestjs-api
```

The above command creates a Docker container and executes npm run start:prod inside it.
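On its own, that command doesn’t publish any ports or provide environment variables. To run the container manually, without Docker Compose, we could pass a few extra flags. A sketch, assuming the .env file described later in this article:

```shell
# run in the background, load variables from .env,
# and publish container port 3000 on the host
docker run --detach --name nestjs-api --env-file .env -p 3000:3000 nestjs-api

# follow the logs, then stop and remove the container
docker logs --follow nestjs-api
docker stop nestjs-api
docker rm nestjs-api
```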
Often an application consists of multiple Docker containers, though. In our case, we want to run PostgreSQL and let our NestJS API connect with it. Maintaining multiple containers and ensuring they can communicate with each other can be challenging.
Fortunately, we can use Docker Compose, a tool that helps us run multi-container Docker applications. When working with it, we need to define the configuration of our application using YAML.
```yaml
version: "3"
services:
  postgres:
    image: postgres:15.1
    networks:
      - postgres
    volumes:
      - /data/postgres:/data/postgres
    env_file:
      - docker.env

  pgadmin:
    image: dpage/pgadmin4:6.18
    networks:
      - postgres
    ports:
      - "8080:80"
    volumes:
      - /data/pgadmin:/root/.pgadmin
    env_file:
      - docker.env

  nestjs-api:
    image: nestjs-api
    env_file:
      - .env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - postgres

networks:
  postgres:
    driver: bridge
```

A few significant things are happening above, so let’s break the file down.
Creating a custom network
We create a custom network by adding the postgres entry in the networks section.
```yaml
networks:
  postgres:
    driver: bridge
```

Thanks to the above, we can easily specify which Docker containers can communicate with each other.
Setting up PostgreSQL
The first Docker container we specify in the services section uses the postgres:15.1 image.
It’s always a good idea to use a specific version instead of postgres:latest.
```yaml
postgres:
  image: postgres:15.1
  networks:
    - postgres
  volumes:
    - /data/postgres:/data/postgres
  env_file:
    - docker.env
```

You can see above that we add it to the postgres network so that other Docker containers can communicate with it.
By configuring volumes, we allow the Docker container to persist data outside of the container. Thanks to that, once we shut down the container and rerun it, we don’t end up with an empty database.
By specifying the env_file property, we can provide a set of environment variables the container will use, such as the credentials required to connect to the database.
```
POSTGRES_USER=admin
POSTGRES_PASSWORD=admin
POSTGRES_DB=nestjs
PGADMIN_DEFAULT_EMAIL=admin@admin.com
PGADMIN_DEFAULT_PASSWORD=admin
```

Configuring pgAdmin
pgAdmin is a useful tool to help us manage our PostgreSQL database. Our pgAdmin container runs using the dpage/pgadmin4:6.18 image.
```yaml
pgadmin:
  image: dpage/pgadmin4:6.18
  networks:
    - postgres
  ports:
    - "8080:80"
  volumes:
    - /data/pgadmin:/root/.pgadmin
  env_file:
    - docker.env
```

The new thing to grasp in the above code is the ports section. By default, pgAdmin runs on port 80. Thanks to adding "8080:80" to the ports section, Docker exposes pgAdmin outside of the container and allows us to access it through http://localhost:8080 on our machine.
To log in, we need to use the credentials provided in the docker.env file.
To connect to our PostgreSQL database, we also need to use the credentials provided in the docker.env file.
Since both pgAdmin and PostgreSQL run in the Docker containers, we need to provide postgres as the host name. It matches the name of the postgres service specified in the docker-compose.yml file.
Setting up our NestJS application
The last step is to add our NestJS application to our docker-compose configuration.
```yaml
nestjs-api:
  image: nestjs-api
  env_file:
    - .env
  ports:
    - "3000:3000"
  depends_on:
    - postgres
  networks:
    - postgres
```

The most crucial part of the above code is image: nestjs-api. The name of the Docker image needs to match the docker build --tag "nestjs-api" . command we’ve used before.
By adding .env to the env_file section, we can provide a list of environment variables for our NestJS application.
```
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=admin
POSTGRES_PASSWORD=admin
POSTGRES_DB=nestjs

JWT_SECRET=lum!SXwGL00Q
JWT_EXPIRATION_TIME=21600

PORT=3000
```

As you can see above, our NestJS application runs on port 3000 inside the Docker container. We also expect Docker to expose our NestJS API outside of the container. By adding "3000:3000" to the ports section, we specify that we want to be able to reach our NestJS API using http://localhost:3000.
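Inside the application, these variables arrive through process.env. The app would typically consume them via @nestjs/config, but as a plain TypeScript sketch of how they might map to a database connection config — the buildDatabaseConfig helper below is hypothetical, not part of the repository:

```typescript
interface DatabaseConfig {
  host: string;
  port: number;
  user: string;
  password: string;
  database: string;
}

// Hypothetical helper: maps the variables from .env to a connection
// config object, failing fast when a required variable is missing.
function buildDatabaseConfig(
  env: Record<string, string | undefined>,
): DatabaseConfig {
  const required = [
    'POSTGRES_HOST',
    'POSTGRES_PORT',
    'POSTGRES_USER',
    'POSTGRES_PASSWORD',
    'POSTGRES_DB',
  ];
  for (const name of required) {
    if (!env[name]) {
      throw new Error(`Missing environment variable: ${name}`);
    }
  }
  return {
    host: env.POSTGRES_HOST!,
    port: parseInt(env.POSTGRES_PORT!, 10),
    user: env.POSTGRES_USER!,
    password: env.POSTGRES_PASSWORD!,
    database: env.POSTGRES_DB!,
  };
}
```

Failing fast on a missing variable makes misconfigured containers crash at startup instead of at the first database query.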
By adding postgres to the depends_on array, we state that our NestJS API shouldn’t start before the PostgreSQL service. Keep in mind that depends_on only waits for the postgres container to start, not for the database inside it to be ready to accept connections.
Running all of the Docker containers
The last step is to run the above Docker Compose configuration.
```bash
docker-compose up
```

Once we run Docker Compose, it creates and runs all of the specified Docker containers. Now our application is up and running. It’s also accessible outside of Docker.
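A few other Docker Compose commands can come in handy when working with this setup; these are standard flags, not specific to this project:

```shell
# start everything in the background
docker-compose up --detach

# follow the logs of just the NestJS service
docker-compose logs --follow nestjs-api

# stop and remove the containers and the network
docker-compose down
```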
Summary
In this article, we went through the configuration required to run NestJS in a Docker container properly. To do that, we had to learn how to write the Dockerfile and build our Docker image. We also created the Docker Compose configuration that allowed us to run other services our NestJS API needs, such as PostgreSQL. Finally, we also made sure that all of our Docker containers could communicate with each other.
There is still more to learn when it comes to using NestJS with Docker, so stay tuned!