Nest.js Tutorial

Cache with Redis. Running the app in a Node.js cluster

Marcin Wanago

Redis is a fast and reliable key-value store. It keeps the data in memory, although, by default, Redis also periodically persists it to disk.
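How often that happens is configurable through the save directives in redis.conf. As an illustration (these are example values, not something our project sets explicitly), a snapshot policy can look like this:

```
# Take a snapshot if at least 1 key changed within 900 seconds,
# 10 keys changed within 300 seconds, or 10000 keys changed within 60 seconds.
save 900 1
save 300 10
save 60 10000
```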

In the previous part of this series, we used a cache stored in our application’s memory. While that is simple and efficient, it has its downsides. In applications where performance and availability are crucial, we often run multiple instances of our API. The incoming traffic is then load-balanced and distributed across those instances.

Unfortunately, keeping the cache within the memory of the application means that multiple instances of our API do not share the same cache. Also, restarting the API means losing the cache. Because of all of that, it is worth looking into Redis.

Setting up Redis

Within this series, we’ve used Docker Compose to set up our architecture. It is also very straightforward to set up Redis with Docker. By default, Redis works on port 6379.

docker-compose.yml
version: "3"
services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
# ...

To connect Redis to NestJS, we also need the cache-manager-redis-store library.

npm install cache-manager-redis-store

Unfortunately, this library does not ship with TypeScript type definitions. To deal with that, we can create our own declaration file.

cacheManagerRedisStore.d.ts
declare module 'cache-manager-redis-store' {
  import { CacheStoreFactory } from '@nestjs/common/cache/interfaces/cache-manager.interface';

  const cacheStore: CacheStoreFactory;

  export = cacheStore;
}

To connect to Redis, we need two new environment variables: the host and the port.

app.module.ts
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import * as Joi from '@hapi/joi';

@Module({
  imports: [
    ConfigModule.forRoot({
      validationSchema: Joi.object({
        REDIS_HOST: Joi.string().required(),
        REDIS_PORT: Joi.number().required(),
        // ...
      })
    }),
    // ...
  ],
  controllers: [],
  providers: [],
})
export class AppModule {}
.env
REDIS_HOST=localhost
REDIS_PORT=6379
# ...

Once we do all of the above, we can establish a connection with Redis.

posts.module.ts
import * as redisStore from 'cache-manager-redis-store';
import { CacheModule, Module } from '@nestjs/common';
import PostsController from './posts.controller';
import PostsService from './posts.service';
import Post from './post.entity';
import { TypeOrmModule } from '@nestjs/typeorm';
import { SearchModule } from '../search/search.module';
import PostsSearchService from './postsSearch.service';
import { ConfigModule, ConfigService } from '@nestjs/config';

@Module({
  imports: [
    CacheModule.registerAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => ({
        store: redisStore,
        host: configService.get('REDIS_HOST'),
        port: configService.get('REDIS_PORT'),
        ttl: 120,
      }),
    }),
    TypeOrmModule.forFeature([Post]),
    SearchModule,
  ],
  controllers: [PostsController],
  providers: [PostsService, PostsSearchService],
})
export class PostsModule {}
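Under the hood, the cache manager follows a simple cache-aside flow: check the store for a key, return the value on a hit, and otherwise store the fresh value with a time to live. A minimal sketch of that idea, with an in-memory Map standing in for the Redis store (the SimpleCache class below is purely illustrative, not part of cache-manager):

```typescript
// A toy cache-aside store; entries expire after a TTL, like our Redis config.
type Entry<T> = { value: T; expiresAt: number };

class SimpleCache<T> {
  private store = new Map<string, Entry<T>>();
  constructor(private ttlSeconds: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    // Treat missing and expired entries the same way: a cache miss.
    if (!entry || entry.expiresAt < Date.now()) return undefined;
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, {
      value,
      expiresAt: Date.now() + this.ttlSeconds * 1000,
    });
  }
}

const cache = new SimpleCache<string>(120); // same 120-second TTL as the module config
cache.set('GET /posts', '[...serialized posts...]');
console.log(cache.get('GET /posts')); // prints the cached value on a hit
```

The ttl: 120 we pass to CacheModule plays the same role: cached responses expire after 120 seconds, and the next request after that hits the database again.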

Managing our Redis server with an interface

As we use our app, we might want to look into our Redis data storage. A straightforward way to do that would be to set up Redis Commander through Docker Compose.

docker-compose.yml
version: "3"
services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"

  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
# ...
With depends_on above, we make sure that redis starts before Redis Commander.

Running Redis Commander in such a way creates a web user interface that we can see at http://localhost:8081/.

Thanks to the way we set up the cache in the previous part of this series, we can now have multiple cache keys for the /posts endpoint.

Running multiple instances of NestJS

JavaScript is single-threaded in nature. Although that’s the case, in the tenth article of the Node.js TypeScript series, we’ve learned that Node.js is capable of performing multiple tasks at a time. Aside from the fact that it runs input and output operations in separate threads, Node.js allows us to create multiple processes.

To prevent heavy traffic from putting a strain on our API, we can also launch a cluster of Node.js processes. Such child processes share server ports and work under the same address. With that, the cluster works as a load balancer.

With Node.js we can also use Worker Threads. To read more about it, check out Node.js TypeScript #12. Introduction to Worker Threads with TypeScript
runInCluster.ts
import * as cluster from 'cluster';
import * as os from 'os';

export function runInCluster(
  bootstrap: () => Promise<void>
) {
  const numberOfCores = os.cpus().length;

  if (cluster.isMaster) {
    for (let i = 0; i < numberOfCores; ++i) {
      cluster.fork();
    }
  } else {
    bootstrap();
  }
}

In the example above, our main process creates a child process for each CPU core. By default, Node.js uses the round-robin approach, in which the master process listens on the port we’ve opened, accepts incoming connections, and distributes them across all of the processes in our cluster. Round-robin is the default policy on all platforms except Windows.
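The scheduling happens inside Node.js itself, but the idea behind round-robin is easy to sketch (a toy illustration, not the actual scheduler):

```typescript
// Distribute incoming connections across workers in round-robin order.
function roundRobin<T>(workers: T[]): () => T {
  let next = 0;
  return () => {
    const worker = workers[next];
    next = (next + 1) % workers.length;
    return worker;
  };
}

const pick = roundRobin(['worker-1', 'worker-2', 'worker-3']);
const assignments = [1, 2, 3, 4, 5].map(() => pick());
console.log(assignments);
// → ['worker-1', 'worker-2', 'worker-3', 'worker-1', 'worker-2']
```

Each new connection goes to the next worker in the cycle, so no single process bears all of the traffic.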

If you want to read more about the cluster and how to change the scheduling policy, check out Node.js TypeScript #11. Harnessing the power of many processes using a cluster

To use the above logic, we need to supply it with our bootstrap function. A fitting place for that would be the main.ts file:

main.ts
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import * as cookieParser from 'cookie-parser';
import { ValidationPipe } from '@nestjs/common';
import { ExcludeNullInterceptor } from './utils/excludeNull.interceptor';
import { ConfigService } from '@nestjs/config';
import { config } from 'aws-sdk';
import { runInCluster } from './utils/runInCluster';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalPipes(new ValidationPipe({
    transform: true
  }));
  app.useGlobalInterceptors(new ExcludeNullInterceptor());
  app.use(cookieParser());

  const configService = app.get(ConfigService);
  config.update({
    accessKeyId: configService.get('AWS_ACCESS_KEY_ID'),
    secretAccessKey: configService.get('AWS_SECRET_ACCESS_KEY'),
    region: configService.get('AWS_REGION'),
  });

  await app.listen(3000);
}
runInCluster(bootstrap);

On Linux, we can easily check how many processes our cluster spawns with ps -e | grep node.

Summary

In this article, we’ve expanded on the topic of caching by using Redis. One of its advantages is that the Redis cache can be shared across multiple instances of our application. To experience that, we’ve used the Node.js cluster to spawn multiple processes containing our API. Node.js delegates the incoming requests to the various processes, balancing the load.