Improve Your Frontend App Performance with NGINX Compression
Posted October 30, 2022
Aside from developing your frontend application, one of the most important things is to deploy and run it effectively. Imagine a frontend application that should be deployed with NGINX and Docker. That is easy enough to do, and there are plenty of guides for it, but most of them use only the default NGINX config.
We can improve application loading time with NGINX response compression. It reduces the size of the data transmitted from the web server to the client, and the browser decompresses the received data. There are two standard compression algorithms that are widely supported on the web:

- gzip
- Brotli
I won’t go into the details of how these algorithms work; instead, I will focus on how to configure NGINX and show the difference in response size for each compression method.
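Before touching NGINX, you can get a feel for what compression achieves locally. A minimal sketch, assuming the `gzip` CLI is available (the file name and contents here are made up for illustration):

```shell
# Generate a repetitive, JS-like text file and compress a copy of it.
seq 1 200 | sed 's/.*/console.log("line &");/' > sample.js
gzip -k -9 sample.js               # -k keeps the original, -9 is maximum compression
wc -c sample.js sample.js.gz       # the .gz copy is a fraction of the original size
```

Text-based assets like HTML, CSS, and JavaScript are highly repetitive, which is exactly what these algorithms exploit.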
Create and Build Project
For this part, you can use your own ready-made application. For demonstration purposes, I will use an empty Vite.js template project as the frontend application and lightweight alpine images for Node.js and NGINX.

All command examples are for Unix systems.
- Create a basic application from the Vite.js default TypeScript vanilla template. You can try any other template, add your own features, and run the development server (follow the CLI tips).

  ```sh
  # npm
  npm create vite@latest my-dockerized-app -- --template vanilla-ts

  # or yarn
  yarn create vite my-dockerized-app --template vanilla-ts
  ```
- Create `Dockerfile` and `.dockerignore` files.

  ```sh
  cd my-dockerized-app
  touch Dockerfile .dockerignore
  ```
- Add the `node_modules` and `dist` directory paths to the `.dockerignore` file. If you have already installed dependencies, run, or built the project, such artifacts should be kept out of the Docker build context.

  ```
  /dist
  /node_modules
  ```
- Fill in the `Dockerfile`. We will use a multi-stage build: in the first stage, Docker builds the frontend application with the `node` image, and then the build output is copied into the second stage, based on the `nginx` image.

  ```dockerfile
  FROM node:lts-alpine as frontend-build
  WORKDIR /app
  COPY package*.json ./
  RUN npm i
  COPY . .
  RUN npm run build

  FROM nginx:alpine
  COPY --from=frontend-build /app/dist /usr/share/nginx/html
  ```
- Build and run the container.

  ```sh
  docker build -t nginx-frontend .
  docker run -p 80:80/tcp --name nginx-frontend nginx-frontend
  ```
Now we can test our built and containerized application with `curl`, using the `-I` flag to show only the response headers.

```sh
curl -I http://localhost
```
Below you can see a truncated response for the index page with the illustrative headers. We are interested in the `Content-Length` header, which shows the size of the response body in bytes.
```
HTTP/1.1 200 OK
Server: nginx/1.23.1
Content-Type: text/html
Content-Length: 448
Connection: keep-alive
Accept-Ranges: bytes
```
HTML pages usually weigh little, so a more visual showcase is to request a JavaScript asset file.

```sh
curl -I http://localhost/assets/index.acb3e620.js
```
Most likely, the filename hash of the JS asset will be different in your case; you can find it among the requests on the network tab in your browser while the index page loads. Without a browser, you can find the `.js` filename with the command below.

```sh
docker exec nginx-frontend ls /usr/share/nginx/html/assets | grep .js
```
So, as you can see below, the size of the transferred JavaScript asset file is 1436 bytes. Let’s see how we can reduce it.
```
HTTP/1.1 200 OK
Server: nginx/1.23.1
Content-Type: application/javascript
Content-Length: 1436
Connection: keep-alive
Accept-Ranges: bytes
```
gzip
The first option is gzip compression, a batteries-included module in the default NGINX package; we only need to edit the configuration file.
- Stop the running container and add the configuration file.

  ```sh
  touch nginx.conf
  ```
- Edit the `Dockerfile` to copy the created configuration file into the container.

  ```dockerfile
  FROM node:lts-alpine as frontend-build
  WORKDIR /app
  COPY package*.json ./
  RUN npm i
  COPY . .
  RUN npm run build

  FROM nginx:alpine
  COPY --from=frontend-build /app/dist /usr/share/nginx/html
  COPY nginx.conf /etc/nginx/nginx.conf
  ```
- Fill the `nginx.conf` file with the following lines.

  ```nginx
  user nginx;
  worker_processes auto;

  error_log /var/log/nginx/error.log notice;
  pid /var/run/nginx.pid;

  events {
      worker_connections 1024;
  }

  http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;

      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

      access_log /var/log/nginx/access.log main;

      sendfile on;
      keepalive_timeout 65;

      gzip on;
      gzip_types application/javascript image/svg+xml text/css;
      gzip_min_length 100;

      include /etc/nginx/conf.d/*.conf;
  }
  ```
Such a configuration overrides the default NGINX configuration with three directives: `gzip`, `gzip_types`, and `gzip_min_length`. The first one activates gzip compression on the web server, the second specifies the MIME types of files to compress, and the last one sets the minimum length, in bytes, of a response that will be gzipped. The `gzip_min_length` check relies on the initial `Content-Length` value to decide whether compression happens at all, and as we saw earlier, our HTML page weighs less than 500 bytes; that’s why we have set the value to `100` bytes.

- Delete the previous container, then rebuild and run the updated one.

  ```sh
  docker rm nginx-frontend
  docker build -t nginx-frontend .
  docker run -p 80:80/tcp --name nginx-frontend nginx-frontend
  ```
- Test the size of the compressed file. To make `curl` send the `Accept-Encoding` request header, add the `--compressed` option.

  ```sh
  curl --compressed -I http://localhost
  ```
You can see the new response header, `Content-Encoding`, which displays the name of the encoding algorithm.

```
HTTP/1.1 200 OK
Server: nginx/1.23.1
Content-Type: text/html
Connection: keep-alive
Content-Encoding: gzip
```
With encoding enabled, we can no longer use the `Content-Length` header to get the size of the response: NGINX compresses the body on the fly and streams it with chunked transfer encoding, so the header is not sent. Instead, we can ask `curl` to measure the transfer.

```sh
curl --compressed -so /dev/null http://localhost -w '%{size_download} bytes\n'
```

The command above prints information to stdout after a completed transfer, here the total number of bytes that were downloaded.

```
300 bytes
```
This is a much better result than the 448 bytes without compression. Now let’s test the JavaScript asset file.

```sh
curl --compressed -so /dev/null http://localhost/assets/index.acb3e620.js -w '%{size_download} bytes\n'
```

```
752 bytes
```
The result is almost twice as good: 752 bytes against 1436. But don’t forget that such improvements benefit only clients and the amount of transferred data; compression increases the computational workload on the web server, because the data is compressed dynamically, i.e. on the fly.
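A quick back-of-the-envelope check of the saving, using the two sizes measured above:

```shell
# 1436 bytes uncompressed vs 752 bytes gzipped
awk 'BEGIN { printf "gzip saved %.1f%% of the transfer\n", (1 - 752 / 1436) * 100 }'
# → gzip saved 47.6% of the transfer
```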
Another way to reduce the workload on a web server that serves compressed files is to build the application as a precompressed gzipped static bundle that can be stored and sent as-is, but then NGINX must include the static gzip module and use a different configuration.
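As a sketch of that idea, the relevant `http`-level directives might look like this (values are illustrative; `gzip_static` requires NGINX to be built with the `ngx_http_gzip_static_module`):

```nginx
gzip on;
gzip_comp_level 5;    # 1-9: higher means smaller responses but more CPU per request
gzip_static on;       # serve an existing precompressed .gz file instead of compressing on the fly
gzip_types application/javascript image/svg+xml text/css;
gzip_min_length 100;
```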
Brotli
Another compression option is the Brotli module. The current latest version of NGINX (1.23 at the time of publication) does not include a Brotli module, so we need to download the NGINX sources and build them with the external ngx_brotli module.
- Stop the running container and edit the `Dockerfile`. We won’t use the NGINX Docker image anymore; instead, we switch to a plain alpine image to install dependencies, download the NGINX source archive, and build it with the `ngx_brotli` module.

  ```dockerfile
  FROM node:lts-alpine as frontend-build
  WORKDIR /app
  COPY package*.json ./
  RUN npm install
  COPY . .
  RUN npm run build

  FROM alpine:latest
  RUN apk add --update --no-cache build-base git pcre-dev openssl-dev zlib-dev linux-headers \
      && wget https://nginx.org/download/nginx-1.23.2.tar.gz \
      && tar zxf nginx-1.23.2.tar.gz \
      && git clone https://github.com/google/ngx_brotli.git --recursive \
      && cd ../nginx-1.23.2 \
      && ./configure \
          --with-compat \
          --prefix=/usr/share/nginx \
          --sbin-path=/usr/local/sbin/nginx \
          --conf-path=/etc/nginx/nginx.conf \
          --pid-path=/run \
          --add-dynamic-module=../ngx_brotli \
      && make modules \
      && make install
  COPY --from=frontend-build /app/dist /usr/share/nginx/html
  COPY nginx.conf /etc/nginx/nginx.conf
  CMD ["nginx", "-g", "daemon off;"]
  ```
- Edit `nginx.conf`.

  ```nginx
  load_module modules/ngx_http_brotli_filter_module.so;
  load_module modules/ngx_http_brotli_static_module.so;

  user nobody;
  worker_processes auto;

  error_log /dev/stderr;
  pid /run/nginx.pid;

  events {
      worker_connections 1024;
  }

  http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;

      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

      access_log /dev/stdout;

      sendfile on;
      keepalive_timeout 65;

      brotli on;
      brotli_types application/javascript image/svg+xml text/css;
      brotli_min_length 100;

      gzip on;
      gzip_types application/javascript image/svg+xml text/css;
      gzip_min_length 100;

      server {
          listen 80;

          location / {
              root /usr/share/nginx/html;
              index index.html index.htm;
          }
      }
  }
  ```
At the top of the file we added two `load_module` statements with paths to the external Brotli modules. The Brotli module uses directives similar to those of the gzip module:

- `brotli` to switch Brotli compression on
- `brotli_types` to specify the MIME types of files to compress
- `brotli_min_length` to set the minimum length of a response that will be compressed
We will leave the gzip directives in place as a fallback, even though Brotli encoding is supported by almost all modern browsers.
- Delete the previous container, then rebuild and run the updated one.

  ```sh
  docker rm nginx-frontend
  docker build -t nginx-frontend .
  docker run -p 80:80/tcp --name nginx-frontend nginx-frontend
  ```
- Test the size of the index page compressed with the Brotli module.

  ```sh
  curl --compressed -I http://localhost
  ```
From the `Content-Encoding` response header value, you will see that Brotli encoding is now applied.

```
HTTP/1.1 200 OK
Server: nginx/1.23.2
Content-Type: text/html
Connection: keep-alive
Content-Encoding: br
```
- Test the size of the Brotli-compressed assets.

  ```sh
  curl --compressed -so /dev/null http://localhost -w '%{size_download} bytes\n'
  ```
The result is a little better than gzip encoding: 205 bytes against 300.

```
205 bytes
```
For the JavaScript asset file, Brotli compression saves more than 100 bytes compared to gzip.

```sh
curl --compressed -so /dev/null http://localhost/assets/index.acb3e620.js -w '%{size_download} bytes\n'
```

```
634 bytes
```
Brotli compression has the same drawback as gzip encoding, the increased workload of on-the-fly compression, but it can likewise be optimized with the static module and precompressed application files. It is also worth paying attention to the `brotli_comp_level` directive, which sets the level of compression.
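For example, a hypothetical tuning of the Brotli directives from our config might look like this (values are illustrative; `brotli_static` is provided by the same ngx_brotli bundle):

```nginx
brotli on;
brotli_comp_level 6;    # 0-11: higher means smaller responses but more CPU (6 is the module default)
brotli_static on;       # serve an existing precompressed .br file instead of compressing on the fly
brotli_types application/javascript image/svg+xml text/css;
brotli_min_length 100;
```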
Synopsis
Compression is a flexible tool to improve the performance of your application that can be configured to your needs and capabilities. It doesn’t require many lines of code or complex configuration to reduce the size of transferred data and improve load times for your application’s clients.