The previous article taught us how to use Docker and Docker Compose with NestJS. In this article, we expand that knowledge with various tips and tricks for improving the developer experience with Docker and Docker Compose.
Check out this repository if you want to see the full code for this article.
Building the Docker Image automatically
In the previous part of this series, we built our basic Dockerfile.
```Dockerfile
FROM node:18-alpine

WORKDIR /user/src/app

COPY . .

RUN npm ci --omit=dev

RUN npm run build

USER node

CMD ["npm", "run", "start:prod"]
```

Then, we built the Docker image by running the docker build command and giving it a tag.
```shell
docker build --tag "nestjs-api" .
```

Then, we added the above nestjs-api tag to our Docker Compose configuration.
```yaml
version: "3"
services:
  nestjs-api:
    image: nestjs-api
    env_file:
      - .env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - postgres

# ...
```

Once we have the above file, we can run docker-compose up to run our application.
Automating the process of building the image
Unfortunately, the above process requires developers to run two commands instead of one. Also, we must remember to run docker build every time there is a change in our code.
Instead, we can point Docker Compose to our Dockerfile and expect it to build the required Docker image.
```yaml
version: "3"
services:
  postgres:
    image: postgres:15.1
    networks:
      - postgres
    volumes:
      - /data/postgres:/data/postgres
    env_file:
      - docker.env

  pgadmin:
    image: dpage/pgadmin4:6.18
    networks:
      - postgres
    ports:
      - "8080:80"
    volumes:
      - /data/pgadmin:/root/.pgadmin
    env_file:
      - docker.env

  nestjs-api:
    build:
      context: .
    env_file:
      - .env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - postgres

networks:
  postgres:
    driver: bridge
```

Thanks to adding the build section to our configuration and pointing it at the directory containing the Dockerfile, we can expect Docker Compose to build the necessary image.
There is one caveat, though. To ensure that Docker Compose always rebuilds the image even if an old version is available, we need to add the --build flag.
```shell
docker-compose up --build
```

Dealing with cache
By adding the --build flag, we expect Docker Compose to rebuild our image every time we run docker-compose up. Let’s look at how Docker handles its cache so that we don’t spend too long waiting for each build to finish.
Each instruction in our Dockerfile roughly translates to a layer in our image. Therefore, whenever a layer changes, it must be rebuilt together with all the layers that follow it.
Let’s say we made a slight change to our main.ts file. Unfortunately, that invalidates the layer created by the COPY . . instruction, since we use it to copy all of our files.
Due to how we structured our Dockerfile, making changes to our source code causes Docker to reinitialize our whole node_modules directory with the npm ci command. Let’s improve that by changing how we use the COPY instruction.
```Dockerfile
FROM node:18-alpine

WORKDIR /user/src/app

COPY package.json package-lock.json ./

RUN npm ci --omit=dev

COPY . .

RUN npm run build

USER node

CMD ["npm", "run", "start:prod"]
```

Above, we first copy only the package.json and package-lock.json files. Then, we install all of the dependencies. We can think of this as a milestone that Docker reaches and stores in its cache. Now, Docker knows that modifying the main.ts file does not affect the npm ci command, so it does not reinstall the packages unnecessarily.
The above approach can drastically decrease the time required for the Docker image to be built.
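Since the COPY . . instruction copies everything from the build context, it is also worth making sure the context does not contain files that should never end up in the image, such as a locally installed node_modules directory. Below is a minimal .dockerignore sketch; the exact entries are an assumption based on a typical NestJS project, not something this article’s repository necessarily uses:

```
node_modules
dist
.git
```

Keeping the build context small speeds up builds and prevents a host-installed node_modules from overwriting the one created by npm ci inside the image.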
Restarting the application on changes
Applying the changes we made to our source code now takes a bit of work. First, we need to stop all of our Docker containers and then rerun them, which causes the Docker image with our API to be rebuilt.
Instead, when running our application in development, we can do the following:
- install the necessary dependencies,
- run the npm run start:dev command.
When using the above approach, NestJS watches for any changes made to the source code and restarts automatically.
Implementing a multi-stage Docker build
The issue is that when we build our Docker image using the Dockerfile, it always creates a production build and ends with npm run start:prod.
The first step to changing the above is to implement a multi-stage build. Thanks to this approach, we don’t need separate Dockerfiles for development and production. Instead, we divide a single Dockerfile into stages.
Each stage begins with a FROM statement. We can copy files between stages, leaving behind any files we don’t need anymore. Thanks to that, we can achieve a smaller Docker image.
```Dockerfile
# Installing dependencies:

FROM node:18-alpine AS install-dependencies

WORKDIR /user/src/app

COPY package.json package-lock.json ./

RUN npm ci --omit=dev

COPY . .


# Creating a build:

FROM node:18-alpine AS create-build

WORKDIR /user/src/app

COPY --from=install-dependencies /user/src/app ./

RUN npm run build

USER node


# Running the application:

FROM node:18-alpine AS run

WORKDIR /user/src/app

COPY --from=install-dependencies /user/src/app/node_modules ./node_modules
COPY --from=create-build /user/src/app/dist ./dist
COPY package.json ./

CMD ["npm", "run", "start:prod"]
```

Each of our stages above uses the node:18-alpine image as its base, but that does not have to be the case.
Please notice above that our final Docker image contains only node_modules, dist, and package.json. Thanks to that, we’ve managed to shave off some unnecessary data by copying only the files necessary to run the application.
Modifying the Docker Compose configuration
Thanks to dividing our Dockerfile into stages, we can tell Docker Compose to target a specific stage.
```yaml
version: "3"
services:
  nestjs-api:
    build:
      context: .
      target: install-dependencies
    command: npm run start:dev
    volumes:
      - ./src:/user/src/app/src
    env_file:
      - .env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - postgres

# ...
```

Above, we explicitly tell Docker to run only the install-dependencies stage from our Dockerfile. This means that Docker won’t create a production build.
Since our install-dependencies stage does not contain the CMD instruction, we need some way to tell Docker what to do. We do that by adding command: npm run start:dev to our Docker Compose configuration.
So far, we’ve been using the volumes property to allow our PostgreSQL Docker container to persist its data outside of the container. Thanks to that, when we rerun our PostgreSQL container after it’s been shut down, we don’t end up with an empty database. We can use the same approach with the Docker container running our NestJS application.
Thanks to adding ./src:/user/src/app/src to our volumes, Docker synchronizes the src directory in the Docker container with the src directory on our host machine. Thanks to that, whenever we change our source code, the npm run start:dev process is aware of it and restarts our NestJS application.
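Mounting only the src directory also sidesteps a common pitfall: if we mounted the whole project directory instead, the host’s files would shadow the node_modules directory installed inside the image. A frequently used pattern in that situation (shown here only as a hedged sketch, not part of this article’s setup) is to add an anonymous volume that masks node_modules:

```yaml
services:
  nestjs-api:
    volumes:
      # bind-mount the whole project from the host
      - .:/user/src/app
      # anonymous volume keeps the container's own node_modules
      - /user/src/app/node_modules
```

With this pattern, the anonymous volume takes precedence over the bind mount for that path, so the dependencies installed during the image build remain intact.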
Running the debugger
A big part of the developer experience is being able to use a debugger. Fortunately, we can connect a debugger to a Node.js application running in a container.
First, let’s add a new script into our package.json file.
```json
{
  "scripts": {
    "start:inspect": "nest start --debug 0.0.0.0:9229 --watch --exec 'node --inspect-brk'",
    // ...
  },
  // ...
}
```

A few important things are happening above. First, we add --debug 0.0.0.0:9229 to establish a WebSocket connection that our debugger can connect to. Our debugger might also require the --inspect-brk flag, but the NestJS CLI no longer supports it out of the box. Because of that, we need to use the hack with the --exec flag.
We also need to allow our host machine to establish a connection with our Docker container on port 9229. To do that, we need to slightly alter the ports section in our Docker Compose configuration.
```yaml
version: "3"
services:
  nestjs-api:
    build:
      context: .
      target: install-dependencies
    command: npm run start:inspect
    volumes:
      - ./src:/user/src/app/src
    env_file:
      - .env
    ports:
      - "3000:3000"
      - "9229:9229"
    depends_on:
      - postgres
    networks:
      - postgres

# ...
```

Please notice that we are now running the npm run start:inspect command.
Debugging through WebStorm
To debug our application running in a Docker container using WebStorm, we first need to run docker-compose up --build in the terminal to run all of our containers. Remember that because we’ve used the --inspect-brk flag, our NestJS application will not run until we connect the debugger.
Then, we need to go to “Run -> Edit Configurations” and create the “Attach to Node.js/Chrome” configuration by clicking on the plus icon.
Once we do that, we need to check the “Reconnect automatically” checkbox so that the debugger reconnects when our application restarts after changes.
As soon as we choose “Run -> Debug ‘Attach to container’”, WebStorm connects the debugger through a WebSocket to our Docker container.
You can also debug using Visual Studio Code in a similar way. Check out the official Visual Studio Code documentation for a step-by-step explanation.
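Under the hood, attaching from Visual Studio Code boils down to an attach-type launch configuration in .vscode/launch.json. The snippet below is a minimal sketch, assuming the port and directory layout used in this article; adjust remoteRoot if your WORKDIR differs:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Docker container",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "address": "localhost",
      "restart": true,
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/user/src/app"
    }
  ]
}
```

The "restart": true option plays the same role as WebStorm’s “Reconnect automatically” checkbox, reattaching the debugger whenever our application restarts.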
Summary
In this article, we implemented a few significant developer experience improvements that make our work with NestJS and Docker easier. We’ve automated building the Docker image by changing our Docker Compose configuration. Along the way, we’ve also improved how we use the build cache by making subtle changes to our Dockerfile. We also learned how to restart our application on changes and how to use a debugger with the NestJS app running in a container. All of the above can definitely improve the developer experience and make our job easier.