While storing files directly in the database is doable, it is often not the best approach. Files can take up a lot of space, which can hurt the performance of the application. They also increase the size of the database and, therefore, make backups bigger and slower. A good alternative is to store files separately using an external provider, such as Google Cloud, Azure, or Amazon AWS.
In this article, we look into uploading files to Amazon Simple Storage Service, also referred to as S3. You can find all the code from this series in this repository.
Connecting to Amazon S3
Amazon S3 provides storage that we can use with any type of file. We organize files into buckets and manage them in our API through an SDK.
Once we create an AWS account, we can log in as the root user. Even though we could use the root user's credentials to access S3 through our API, this is not the best approach.
Setting up a user
Let’s create a new user that has a restricted set of permissions. To do so, we need to open the Identity and Access Management (IAM) panel and create a user:
Since we want this user to be able to manage everything connected to S3, let’s set up proper access.
After doing that, we are presented with an Access key ID and a Secret access key. We need them to connect to AWS through our API. We also need to choose one of the available regions.
Let’s add them to our .env file:
# ...
AWS_REGION=eu-central-1
AWS_ACCESS_KEY_ID=*******
AWS_SECRET_ACCESS_KEY=*******

Also, let’s add them to our environment variables validation schema in AppModule:
ConfigModule.forRoot({
  validationSchema: Joi.object({
    POSTGRES_HOST: Joi.string().required(),
    POSTGRES_PORT: Joi.number().required(),
    POSTGRES_USER: Joi.string().required(),
    POSTGRES_PASSWORD: Joi.string().required(),
    POSTGRES_DB: Joi.string().required(),
    JWT_SECRET: Joi.string().required(),
    JWT_EXPIRATION_TIME: Joi.string().required(),
    AWS_REGION: Joi.string().required(),
    AWS_ACCESS_KEY_ID: Joi.string().required(),
    AWS_SECRET_ACCESS_KEY: Joi.string().required(),
    PORT: Joi.number(),
  })
}),

Connecting to AWS through SDK
Once we have the necessary variables, we can connect to AWS using the official SDK for Node. Let’s install it first.
npm install aws-sdk @types/aws-sdk

Since we’ve got everything that we need to configure the SDK, let’s use it. One of the ways to do so is to use aws-sdk straight in our main.ts file.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import * as cookieParser from 'cookie-parser';
import { ValidationPipe } from '@nestjs/common';
import { ExcludeNullInterceptor } from './utils/excludeNull.interceptor';
import { ConfigService } from '@nestjs/config';
import { config } from 'aws-sdk';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalPipes(new ValidationPipe());
  app.useGlobalInterceptors(new ExcludeNullInterceptor());
  app.use(cookieParser());

  const configService = app.get(ConfigService);
  config.update({
    accessKeyId: configService.get('AWS_ACCESS_KEY_ID'),
    secretAccessKey: configService.get('AWS_SECRET_ACCESS_KEY'),
    region: configService.get('AWS_REGION'),
  });

  await app.listen(3000);
}
bootstrap();
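The config.update call above changes the global configuration of the SDK, so any client created afterwards, such as new S3(), picks these values up. If you would rather not rely on that global state, a possible alternative is to pass the credentials directly when constructing a client. This is only a sketch and not what the rest of this article does; the helper name is made up:

import { S3 } from 'aws-sdk';
import { ConfigService } from '@nestjs/config';

// A hypothetical factory that configures a single client instead of the whole SDK
function createS3Client(configService: ConfigService) {
  return new S3({
    region: configService.get('AWS_REGION'),
    accessKeyId: configService.get('AWS_ACCESS_KEY_ID'),
    secretAccessKey: configService.get('AWS_SECRET_ACCESS_KEY'),
  });
}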
Creating our first bucket
In Amazon S3, data is organized into buckets. We can have multiple buckets with different settings.
Let’s open the Amazon S3 panel and create a bucket. Please note that the name of the bucket must be globally unique.
We can set up our bucket to contain public files. All files that we upload to this bucket will be publicly available. We might use it to manage files such as avatars.
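If you prefer to script the bucket setup instead of clicking through the console, a rough sketch with the SDK could look like the one below. The bucket name and the public-read policy are only examples, and depending on the account’s Block Public Access settings, AWS might reject the policy:

import { S3 } from 'aws-sdk';

// A hypothetical script that creates a bucket and allows public reads of its objects.
// It assumes the SDK has already been configured with credentials and a region.
async function createPublicBucket(bucketName: string) {
  const s3 = new S3();
  await s3.createBucket({ Bucket: bucketName }).promise();
  await s3.putBucketPolicy({
    Bucket: bucketName,
    Policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [
        {
          Sid: 'PublicRead',
          Effect: 'Allow',
          Principal: '*',
          Action: ['s3:GetObject'],
          Resource: [`arn:aws:s3:::${bucketName}/*`],
        },
      ],
    }),
  }).promise();
}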
The last step here is to add the name of the bucket to our environment variables.
# ...
AWS_PUBLIC_BUCKET_NAME=nestjs-series-public-bucket

ConfigModule.forRoot({
  validationSchema: Joi.object({
    // ...
    AWS_PUBLIC_BUCKET_NAME: Joi.string().required(),
  })
}),

Uploading images through our API
Since we’ve got the AWS connection set up, we can proceed with uploading our files. For starters, let’s create a PublicFile entity.
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
class PublicFile {
  @PrimaryGeneratedColumn()
  public id: number;

  @Column()
  public url: string;

  @Column()
  public key: string;
}

export default PublicFile;

By saving the URL directly in the database, we can access the public file very quickly. The key property uniquely identifies the file in the bucket. We need it to access the file, for example, if we want to delete it.
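To make those two properties more tangible, a single upload could end up stored more or less like this. The values below are made up, and the exact shape of the URL depends on the region and the bucket configuration:

// A hypothetical record saved after one upload
const example: PublicFile = {
  id: 1,
  key: 'f2c1a1de-5d1b-4c3e-9a0a-2f4b8c7d6e5f-avatar.png',
  url: 'https://nestjs-series-public-bucket.s3.eu-central-1.amazonaws.com/f2c1a1de-5d1b-4c3e-9a0a-2f4b8c7d6e5f-avatar.png',
};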
The next step is creating a service that uploads files to the bucket and saves the data about the file to our Postgres database. Since we want keys to be unique, we use the uuid library:
npm install uuid @types/uuid

import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import PublicFile from './publicFile.entity';
import { S3 } from 'aws-sdk';
import { ConfigService } from '@nestjs/config';
import { v4 as uuid } from 'uuid';

@Injectable()
export class FilesService {
  constructor(
    @InjectRepository(PublicFile)
    private publicFilesRepository: Repository<PublicFile>,
    private readonly configService: ConfigService
  ) {}

  async uploadPublicFile(dataBuffer: Buffer, filename: string) {
    const s3 = new S3();
    const uploadResult = await s3.upload({
      Bucket: this.configService.get('AWS_PUBLIC_BUCKET_NAME'),
      Body: dataBuffer,
      Key: `${uuid()}-${filename}`
    })
      .promise();

    const newFile = this.publicFilesRepository.create({
      key: uploadResult.Key,
      url: uploadResult.Location
    });
    await this.publicFilesRepository.save(newFile);
    return newFile;
  }
}

The uploadPublicFile method expects a buffer. It is a chunk of memory that keeps a binary representation of our file. If you want to know more about it, check out Node.js TypeScript #3. Explaining the Buffer.
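For the FilesService to be injectable elsewhere, it also needs a module that registers the PublicFile entity and exports the service. That file is not shown in this article, so treat the sketch below as an assumption about how it could be wired up:

import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';
import { FilesService } from './files.service';
import PublicFile from './publicFile.entity';

@Module({
  // Register the entity so the repository can be injected,
  // and import ConfigModule so the service can read the bucket name
  imports: [TypeOrmModule.forFeature([PublicFile]), ConfigModule],
  providers: [FilesService],
  // Export the service so that other modules, such as UsersModule, can use it
  exports: [FilesService],
})
export class FilesModule {}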
Creating an endpoint for uploading files
Now we need to create an endpoint that the user can upload an avatar to. To link files with users, we need to modify the User entity by adding the avatar property.
import { Entity, JoinColumn, OneToOne } from 'typeorm';
import PublicFile from '../files/publicFile.entity';

@Entity()
class User {
  // ...

  @JoinColumn()
  @OneToOne(
    () => PublicFile,
    {
      eager: true,
      nullable: true
    }
  )
  public avatar?: PublicFile;
}

export default User;

If you want to know more about relationships with Postgres and TypeORM, check out API with NestJS #7. Creating relationships with Postgres and TypeORM.
Let’s add a method to the UsersService that uploads files and links them to the user.
import { HttpException, HttpStatus, Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import User from './user.entity';
import { FilesService } from '../files/files.service';

@Injectable()
export class UsersService {
  constructor(
    @InjectRepository(User)
    private usersRepository: Repository<User>,
    private readonly filesService: FilesService
  ) {}

  // ...

  async getById(id: number) {
    const user = await this.usersRepository.findOne({ id });
    if (user) {
      return user;
    }
    throw new HttpException('User with this id does not exist', HttpStatus.NOT_FOUND);
  }

  async addAvatar(userId: number, imageBuffer: Buffer, filename: string) {
    const avatar = await this.filesService.uploadPublicFile(imageBuffer, filename);
    const user = await this.getById(userId);
    await this.usersRepository.update(userId, {
      ...user,
      avatar
    });
    return avatar;
  }
}

This might be a fitting place to include additional functionality, such as checking the size of the image or compressing it.
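For example, a minimal sketch of such a size check, placed at the top of addAvatar before the upload, could look like this. The one-megabyte limit is an arbitrary value chosen for illustration, and BadRequestException comes from @nestjs/common:

// Inside addAvatar, before calling uploadPublicFile
const maxAvatarSizeInBytes = 1024 * 1024; // 1 MB, an example limit
if (imageBuffer.length > maxAvatarSizeInBytes) {
  throw new BadRequestException('The avatar image is too big');
}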
The last piece is adding an endpoint that users can send avatars to. To do that, we follow the NestJS documentation and use the FileInterceptor, which utilizes multer under the hood.
import { UsersService } from './users.service';
import { Controller, Post, Req, UploadedFile, UseGuards, UseInterceptors } from '@nestjs/common';
import JwtAuthenticationGuard from '../authentication/jwt-authentication.guard';
import RequestWithUser from '../authentication/requestWithUser.interface';
import { FileInterceptor } from '@nestjs/platform-express';
import { Express } from 'express';

@Controller('users')
export class UsersController {
  constructor(
    private readonly usersService: UsersService,
  ) {}

  @Post('avatar')
  @UseGuards(JwtAuthenticationGuard)
  @UseInterceptors(FileInterceptor('file'))
  async addAvatar(@Req() request: RequestWithUser, @UploadedFile() file: Express.Multer.File) {
    return this.usersService.addAvatar(request.user.id, file.buffer, file.originalname);
  }
}

The file object above has quite a few useful properties, such as the mimetype. You can use it if you want to perform additional validation and disallow certain types of files.
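One way to do that is to pass multer options straight to the FileInterceptor. Below is a sketch of the same endpoint inside UsersController with an example size limit and an example mimetype check; both values are arbitrary, and BadRequestException comes from @nestjs/common:

// A variation of the addAvatar endpoint that rejects non-image files
// and caps the upload at one megabyte (both values are only examples)
@Post('avatar')
@UseGuards(JwtAuthenticationGuard)
@UseInterceptors(FileInterceptor('file', {
  limits: {
    fileSize: 1024 * 1024, // 1 MB
  },
  fileFilter: (request, file, callback) => {
    if (!file.mimetype.startsWith('image/')) {
      return callback(new BadRequestException('Only image files are allowed'), false);
    }
    callback(null, true);
  },
}))
async addAvatar(@Req() request: RequestWithUser, @UploadedFile() file: Express.Multer.File) {
  return this.usersService.addAvatar(request.user.id, file.buffer, file.originalname);
}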
Deleting existing files
Aside from uploading files, we also need a way to remove them. To keep our database consistent with the Amazon S3 storage, we remove the files from both places. First, let’s add the method to the FilesService.
import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import PublicFile from './publicFile.entity';
import { S3 } from 'aws-sdk';
import { ConfigService } from '@nestjs/config';

@Injectable()
export class FilesService {
  constructor(
    @InjectRepository(PublicFile)
    private publicFilesRepository: Repository<PublicFile>,
    private readonly configService: ConfigService
  ) {}

  // ...

  async deletePublicFile(fileId: number) {
    const file = await this.publicFilesRepository.findOne({ id: fileId });
    const s3 = new S3();
    await s3.deleteObject({
      Bucket: this.configService.get('AWS_PUBLIC_BUCKET_NAME'),
      Key: file.key,
    }).promise();
    await this.publicFilesRepository.delete(fileId);
  }
}

Now, we need to use it in our UsersService. An important addition is that when a user uploads a new avatar while already having one, we delete the old one.
import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import User from './user.entity';
import { FilesService } from '../files/files.service';

@Injectable()
export class UsersService {
  constructor(
    @InjectRepository(User)
    private usersRepository: Repository<User>,
    private readonly filesService: FilesService
  ) {}

  // ...

  async addAvatar(userId: number, imageBuffer: Buffer, filename: string) {
    const user = await this.getById(userId);
    if (user.avatar) {
      await this.usersRepository.update(userId, {
        ...user,
        avatar: null
      });
      await this.filesService.deletePublicFile(user.avatar.id);
    }
    const avatar = await this.filesService.uploadPublicFile(imageBuffer, filename);
    await this.usersRepository.update(userId, {
      ...user,
      avatar
    });
    return avatar;
  }

  async deleteAvatar(userId: number) {
    const user = await this.getById(userId);
    const fileId = user.avatar?.id;
    if (fileId) {
      await this.usersRepository.update(userId, {
        ...user,
        avatar: null
      });
      await this.filesService.deletePublicFile(fileId);
    }
  }
}
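The article stops at the service method, but to expose it over HTTP we could add a matching endpoint to the UsersController. The route and the method name below are assumptions; Delete comes from @nestjs/common:

// Inside UsersController
@Delete('avatar')
@UseGuards(JwtAuthenticationGuard)
async deleteAvatar(@Req() request: RequestWithUser) {
  return this.usersService.deleteAvatar(request.user.id);
}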
Summary
In this article, we’ve learned the basics of how Amazon S3 works and how to use it in our API. To do that, we’ve provided the necessary credentials to the AWS SDK. Thanks to that, we were able to upload files to Amazon S3 and delete them. We’ve also kept our database in sync with Amazon S3 to keep track of our files. To upload files through our API, we’ve used the FileInterceptor, which uses multer under the hood.
Since there is more to Amazon S3 than handling public files, there is still quite a bit to cover here, and you can expect more of it in this series.