Next.js Performance



Building React applications with Next.js is a great way of getting things in front of customers quickly. But you might find that they aren't using your site because it's too slow.


Here's a list of things you might be able to do to improve the performance of your Next.js application.

I've geared this towards Next.js for a few specifics I wanted to include, but this list can be applied more generally too.

Frontends are cached entirely on CDNs wherever possible ("Jamstacked", https://jamstack.org/). Where that's not possible, pages are constructed at build time or regenerated on the server using Incremental Static Regeneration (https://www.smashingmagazine.com/2021/04/incremental-static-regeneration-nextjs/).
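
As a rough sketch of ISR (the CMS endpoint below is a placeholder, not from this post), returning a revalidate interval from getStaticProps keeps serving the cached page while regenerating it in the background:

```js
// pages/pricing.js - a minimal ISR sketch; the CMS endpoint is a placeholder
export async function getStaticProps() {
  const res = await fetch('https://cms.example.com/pricing')
  const pricing = await res.json()

  return {
    props: { pricing },
    // Serve the statically generated page and re-generate it in the
    // background at most once every 60 seconds
    revalidate: 60,
  }
}

export default function Pricing({ pricing }) {
  return <pre>{JSON.stringify(pricing, null, 2)}</pre>
}
```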


Use the framework's link component for internal routes so navigation happens client-side and linked pages can be prefetched (https://nextjs.org/docs/api-reference/next/link).
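
For example, assuming the pages-router next/link API that the link above documents (where an anchor child is still used):

```jsx
import Link from 'next/link'

export default function Nav() {
  return (
    <nav>
      {/* Internal routes go through next/link so navigation happens
          client-side and the target page's code can be prefetched */}
      <Link href="/pricing">
        <a>Pricing</a>
      </Link>
    </nav>
  )
}
```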

Images are fetched either at build time or on request from a CDN. Images are served at the correct dimensions and in the most performant formats (https://ericportis.com/posts/2014/srcset-sizes/).

High priority images (those in the viewport when the page is opened) use responsive preload (https://www.bronco.co.uk/our-ideas/using-relpreload-for-responsive-images/).


Low priority images are downloaded asynchronously using loading="lazy". Make use of the framework's image component where possible (https://nextjs.org/docs/api-reference/next/image).
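
A small sketch of both cases with next/image (the image paths and dimensions are placeholders):

```jsx
import Image from 'next/image'

export default function Landing() {
  return (
    <>
      {/* Above the fold: `priority` preloads the image instead of lazy-loading it */}
      <Image src="/hero.jpg" alt="Hero" width={1200} height={600} priority />

      {/* Below the fold: next/image lazy-loads by default */}
      <Image src="/testimonials.jpg" alt="Testimonials" width={1200} height={400} />
    </>
  )
}
```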

Don’t use css-in-js (https://pustelto.com/blog/css-vs-css-in-js-perf/). Only used styles are sent to the client (https://markmurray.co/blog/tree-shaking-css-modules/). If you do use css-in-js, make the CSS as static as possible (https://itnext.io/how-to-increase-css-in-js-performance-by-175x-f30ddeac6bce).


CSS is minified. Use font substitution while web fonts load (https://developer.mozilla.org/en-US/docs/Web/CSS/@font-face/font-display). Use fonts from a CDN.


Download only necessary fonts. Subset fonts where possible (https://developers.google.com/fonts/docs/getting_started#specifying_script_subsets). Only interactive elements are hydrated on the client (https://medium.com/@luke_schmuke/how-we-achieved-the-best-web-performance-with-partial-hydration-20fab9c808d5).

Only used JavaScript is sent to the client (https://web.dev/codelab-remove-unused-code/, https://developers.google.com/web/fundamentals/performance/optimizing-javascript/tree-shaking).

Consider using Preact instead of React (https://dev.to/dlw/next-js-replace-react-with-preact-2i72). JavaScript is minified.
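
The article linked above swaps React for Preact in production client bundles via a webpack alias; a sketch of that approach in next.config.js:

```js
// next.config.js
module.exports = {
  webpack: (config, { dev, isServer }) => {
    // Only alias in production client-side bundles; keep React for dev and SSR
    if (!dev && !isServer) {
      Object.assign(config.resolve.alias, {
        react: 'preact/compat',
        'react-dom/test-utils': 'preact/test-utils',
        'react-dom': 'preact/compat',
      })
    }
    return config
  },
}
```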


Scripts are compressed using GZip (good) or Brotli (better). JavaScript bundles are split to allow for effective download and parsing. Only essential JavaScript is blocking.
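
In Next.js, route-level splitting happens automatically, and next/dynamic lets you split further at the component level (the Chart component path below is illustrative):

```jsx
import dynamic from 'next/dynamic'

// The chart code is moved into its own chunk and only downloaded
// when this component actually renders
const Chart = dynamic(() => import('../components/Chart'), {
  loading: () => <p>Loading chart...</p>,
})

export default function Dashboard() {
  return <Chart />
}
```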

Use web workers for memory intensive operations.
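
A minimal sketch of handing work off to a Web Worker so the main thread stays responsive; the worker file path and message shape are illustrative, and this assumes a webpack 5 setup (which recent Next.js versions use) that can bundle workers created via new URL():

```js
// lib/runHeavyCalculation.js (illustrative helper)
export function runHeavyCalculation(rows, onResult) {
  // The expensive work runs off the main thread, so scrolling and input stay responsive
  const worker = new Worker(new URL('../workers/heavy.worker.js', import.meta.url))

  worker.onmessage = (event) => {
    onResult(event.data)
    worker.terminate()
  }

  worker.postMessage({ rows })
}
```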

  • Use more performant libraries (or use native JavaScript) where possible (e.g. Lodash vs Underscore, Temporal API vs Moment).
  • Only fetch data you need (consider using GraphQL).


No API chaining (consider using GraphQL). Minimise data normalisation (offload it to a standalone function or the backend). Third party scripts are non-blocking (https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/loading-third-party-javascript).

Use resource hinting to parallelise downloads (https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/loading-third-party-javascript). UI placeholders are used for loading states.
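
For example, warming up connections to third-party origins with preconnect/dns-prefetch hints in next/head (the origins shown are placeholders):

```jsx
import Head from 'next/head'

export default function ThirdPartyHints() {
  return (
    <Head>
      {/* Start DNS and TLS negotiation early so later requests to these origins are faster */}
      <link rel="preconnect" href="https://cdn.example.com" />
      <link rel="dns-prefetch" href="https://analytics.example.com" />
    </Head>
  )
}
```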

Loss of connectivity results in a notification and the previous state being shown (https://www.apollographql.com/docs/react/data/queries/#previousdata). Completed action states are shown once data has been sent, rather than waiting for the server to confirm receipt.

Prevent jumping content / layout shift.

  • Reduce DNS resolution & SSL negotiation time where possible (https://zoompf.com/blog/2014/12/optimizing-tls-handshake/).
  • PRs that degrade performance are identified in the pipeline.
  • Frontend performance is measured (https://speedcurve.com/).


Frontend performance is regularly analysed. Analysis is turned into actionable backlog items. There are two benefits to implementing as many of these as you can: conversions will likely improve because more users can use your app.

You will also save on your own costs: fewer downloads, less bandwidth, and where you can serve from cache rather than origin you'll save on infrastructure. I'm sure this list isn't quite complete, so let me know if there's anything I've missed!

Next.js Analytics allows you to analyze and measure the performance of pages using different metrics.

You can start collecting your Real Experience Score with zero-configuration on Vercel deployments.


There's also support for Analytics if you're self-hosting.

The rest of this documentation describes the built-in relayer Next.js Analytics uses.

First, you will need to create a custom App component and define a reportWebVitals function:
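
A minimal version along these lines (logging to the console is just a placeholder for whatever reporting you want):

```js
// pages/_app.js
export function reportWebVitals(metric) {
  console.log(metric)
}

function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />
}

export default MyApp
```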

  • This function is fired when the final values for any of the metrics have finished calculating on the page.
  • You can use it to log any of the results to the console or send them to a particular endpoint.
  • The metric object returned to the function consists of a number of properties:

  • id: Unique identifier for the metric in the context of the current page load.
  • name: Metric name.
  • startTime: First recorded timestamp of the performance entry, in milliseconds (if applicable).
  • value: Value, or duration in milliseconds, of the performance entry.
  • label: Type of metric (web-vital or custom).

There are two types of metrics that are tracked. Web Vitals are a set of useful metrics that aim to capture the user experience of a web page. The following web vitals are all included:

  • Time to First Byte (TTFB)
  • First Contentful Paint (FCP)
  • Largest Contentful Paint (LCP)
  • First Input Delay (FID)
  • Cumulative Layout Shift (CLS)

A third-party library, web-vitals, is used to measure these metrics. You can handle all the results using the web-vital label, or handle each metric separately.
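
For example, sketches of both approaches:

```js
export function reportWebVitals(metric) {
  // Handle all Web Vitals in one place...
  if (metric.label === 'web-vital') {
    console.log(metric) // { id, name, startTime, value, label }
  }

  // ...or handle each metric separately
  switch (metric.name) {
    case 'TTFB':
    case 'FCP':
    case 'LCP':
    case 'FID':
    case 'CLS':
      // handle the individual result here
      break
    default:
      break
  }
}
```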

Browser compatibility depends on the particular metric, so refer to the Browser Support section to find out which browsers are supported. In addition to the core metrics listed above, there are some additional custom metrics that measure the time it takes for the page to hydrate and render:

  • Next.js-hydration: Length of time it takes for the page to start and finish hydrating (in ms).
  • Next.js-route-change-to-render: Length of time it takes for a page to start rendering after a route change (in ms).
  • Next.js-render: Length of time it takes for a page to finish rendering after a route change (in ms).

You can handle all the results of these metrics using the custom label, or handle each of the metrics separately. For example:
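
```js
export function reportWebVitals(metric) {
  if (metric.label === 'custom') {
    switch (metric.name) {
      case 'Next.js-hydration':
        // handle hydration results
        break
      case 'Next.js-route-change-to-render':
        // handle route-change-to-render results
        break
      case 'Next.js-render':
        // handle render results
        break
      default:
        break
    }
  }
}
```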

  • These metrics work in all browsers that support the User Timing API.
  • With the relay function, you can send any of the results to an analytics endpoint to measure and track real user performance on your site (see the sketch below).
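
A sketch of relaying results to your own endpoint (the URL is a placeholder); navigator.sendBeacon is preferred because it survives page unloads, with fetch and keepalive as a fallback:

```js
export function reportWebVitals(metric) {
  const body = JSON.stringify(metric)
  const url = 'https://example.com/analytics' // placeholder endpoint

  if (navigator.sendBeacon) {
    navigator.sendBeacon(url, body)
  } else {
    fetch(url, { body, method: 'POST', keepalive: true })
  }
}
```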

Note: If you use Google Analytics, using the id value can allow you to construct metric distributions manually (to calculate percentiles, etc.).
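
For instance (assuming gtag.js is already loaded on the page):

```js
export function reportWebVitals({ id, name, label, value }) {
  window.gtag('event', name, {
    event_category: label === 'web-vital' ? 'Web Vitals' : 'Next.js custom metric',
    // Google Analytics values must be integers; CLS is scaled up to keep precision
    value: Math.round(name === 'CLS' ? value * 1000 : value),
    // The id is unique to the current page load, so distributions can be rebuilt later
    event_label: id,
    non_interaction: true, // avoids affecting bounce rate
  })
}
```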

Read more about sending results to Google Analytics. If you are using TypeScript, you can also use the built-in type NextWebVitalsMetric for the metric argument.

TL;DR: Next.js 9.3 introduces getStaticPaths, which allows you to generate a data-driven list of pages to render at build time, potentially allowing you to bypass server-side rendering for some use cases.
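
A rough sketch of the getStaticPaths and fallback combination (the route and CMS endpoint are illustrative):

```js
// pages/posts/[slug].js - a rough sketch; the CMS endpoint is a placeholder
const CMS = 'https://cms.example.com'

export async function getStaticPaths() {
  const slugs = await fetch(`${CMS}/post-slugs`).then((res) => res.json())

  return {
    paths: slugs.map((slug) => ({ params: { slug } })),
    // Slugs not listed above are generated on first request;
    // the resulting HTML is then cached and served to subsequent visitors
    fallback: true,
  }
}

export async function getStaticProps({ params }) {
  const post = await fetch(`${CMS}/posts/${params.slug}`).then((res) => res.json())
  return { props: { post } }
}

export default function Post({ post }) {
  // With fallback: true, the first request renders without props while the page is generated
  return <article>{post ? post.title : 'Loading...'}</article>
}
```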

You can now also use the fallback property to dynamically build pages on request, and serve the generated html instead. On a recent project we built a website for a client using a combination of Next.js and Contentful headless CMS.

The goal of the website was to offer a responsive experience across all devices whilst keeping load times to a minimum and supporting SEO.

I rather like Next.js – it combines the benefits of React with Server Side Rendering (SSR) and static HTML builds, enabling caching for quick initial page loads and SEO support. Once the cached SSR page has been downloaded, Next.js “hydrates” the page with React and all of the page components, completely seamlessly to the user.

The website is deployed to AWS using CloudFront and Lambda@Edge as our CDN and SSR platform.


It works by executing a lambda for Origin Requests and caching the results in CloudFront. Regardless of where the page is rendered (client or server), Next.js runs the same code, which in our case queries Contentful for content to display on the page; this is neat, as the same code handles both scenarios.

During testing, we noticed that page requests that weren’t cached in CloudFront could take anything up to 10 seconds to render. Although this only affects requests that miss the cache, this wasn’t acceptable to us as it impacts every page that needs to be server-side generated, and the issue would also be replicated for every edge location in CloudFront.

This issue only affects the first page load of a visitor's session, however, as subsequent requests are handled client-side and only the new page content and assets are downloaded. Whilst investigating the issue we spotted that the majority of processing time was spent in the lambda.

We added extra logging to output the elapsed time at various points in the lambda, and then created custom CloudWatch metrics from these to identify where most of the time was incurred.


We identified that the additional overhead was caused by the require of the specific page's JavaScript file embedded within the lambda, which is dynamically loaded for the page requested.

It’s dynamically loaded to avoid loading all page assets when only rendering a single page, which would add considerable and unnecessary startup time to the lambda.

The lambda we used was based on the Next.js plugin available for the serverless framework, but as we were using Terraform we took the bits we needed from here to make it work (README.md). Due to the overhead from the require statement, we experimented with the resource allocation given to the lambda.
