https://github.com/theninthsky/client-side-rendering

# Client-side Rendering

This project serves as a case study on CSR, examining the potential of client-side rendered apps in comparison to server-side rendering.

For a detailed comparison of all rendering methods, visit the project's _Comparison_ page: https://client-side-rendering.pages.dev/comparison

The findings of this project resulted in the creation of [Adina](https://adinajs.com).

## Table of Contents

- [Intro](#intro)
- [Motivation](#motivation)
- [Performance](#performance)
- [Bundle Size](#bundle-size)
- [Caching](#caching)
- [Code Splitting](#code-splitting)
- [Preloading Async Pages](#preloading-async-pages)
- [Splitting Async Vendors](#splitting-async-vendors)
- [Preloading Data](#preloading-data)
- [Preloading Next Pages Data](#preloading-next-pages-data)
- [Precaching](#precaching)
- [Adaptive Source Inlining](#adaptive-source-inlining)
- [Leveraging the 304 Status Code](#leveraging-the-304-status-code)
- [Navigation Preload](#navigation-preload)
- [Tweaking Further](#tweaking-further)
- [Transitioning Async Pages](#transitioning-async-pages)
- [Revalidating Active Apps](#revalidating-active-apps)
- [Summary](#summary)
- [Deploying](#deploying)
- [Benchmark](#benchmark)
- [Areas for Improvement](#areas-for-improvement)
- [SEO](#seo)
- [Indexing](#indexing)
- [Google](#google)
- [Prerendering](#prerendering)
- [Social Media Share Previews](#social-media-share-previews)
- [Sitemaps](#sitemaps)
- [CSR vs. SSR](#csr-vs-ssr)
- [SSR Disadvantages](#ssr-disadvantages)
- [Why Not SSG?](#why-not-ssg)
- [The Cost of Hydration](#the-cost-of-hydration)
- [Conclusion](#conclusion)
- [What Might Change in the Future](#what-might-change-in-the-future)

# Intro

**Client-side rendering (CSR)** refers to sending static assets to the web browser and allowing it to handle the entire rendering process of the app.
**Server-side rendering (SSR)** involves rendering the entire app (or page) on the server and delivering a pre-rendered HTML document ready for display.
**Static Site Generation (SSG)** is the process of pre-generating HTML pages as static assets, which are then sent and displayed by the browser.

Contrary to common belief, the SSR process in modern frameworks like **React**, **Angular**, **Vue**, and **Svelte** results in the app rendering twice: once on the server and again on the browser (this is known as "hydration"). Without this second render, the app would be static and uninteractive, essentially behaving like a "lifeless" web page.


Interestingly, the hydration process does not appear to be faster than a typical render (excluding the painting phase, of course).


It's also important to note that SSG apps must undergo hydration as well.

In both SSR and SSG, the HTML document is fully constructed, providing the following benefits:

- Web crawlers can index the pages out-of-the-box, which is crucial for SEO.
- The first contentful paint (FCP) is usually very fast (although in SSR, this depends heavily on API server response times).

On the other hand, CSR apps offer the following advantages:

- The app is completely decoupled from the server, meaning it loads independently of the API server's response times, enabling smooth page transitions.
- The developer experience is streamlined, as there's no need to worry about which parts of the code run on the server and which run in the browser.

In this case study, we'll focus on CSR and explore ways to overcome its apparent limitations while leveraging its strengths to the fullest.

All optimizations will be incorporated into the deployed app, which can be found here: [https://client-side-rendering.pages.dev](https://client-side-rendering.pages.dev).

# Motivation

_"Recently, SSR (Server Side Rendering) has taken the JavaScript front-end world by storm. The fact that you can now render your sites and apps on the server before sending them to your clients is an absolutely **revolutionary** idea (and totally not what everyone was doing before JS client-side apps got popular in the first place...)._

_However, the same criticisms that were valid for PHP, ASP, JSP, (and such) sites are valid for server-side rendering today. It's slow, breaks fairly easily, and is difficult to implement properly._

_Thing is, despite what everyone might be telling you, you probably don't need SSR. You can get almost all the advantages of it (without the disadvantages) by using prerendering."_

_~[Prerender SPA Plugin](https://github.com/chrisvfritz/prerender-spa-plugin#what-is-prerendering)_

In recent years, server-side rendering has gained significant popularity in the form of frameworks such as _[Next.js](https://nextjs.org)_ and _[Remix](https://remix.run)_ to the point that developers often default to using them without fully understanding their limitations, even in apps that don't need SEO (e.g., those with login requirements).


While SSR has its advantages, these frameworks continue to emphasize their speed ("Performance as a default"), suggesting that client-side rendering (CSR) is inherently slow.


Additionally, there is a widespread misconception that perfect SEO can only be achieved with SSR, and that CSR apps cannot be optimized for search engine crawlers.

Another common argument for SSR is that as web apps grow larger, their loading times will continue to increase, leading to poor _[FCP](https://web.dev/fcp)_ performance for CSR apps.

While it’s true that apps are becoming more feature-rich, the size of a single page should actually **decrease** over time.


This is due to the trend of creating smaller, more efficient versions of libraries and frameworks, such as _zustand_, _day.js_, _headless-ui_, and _react-router v6_.


We can also observe a reduction in the size of frameworks over time: Angular (74.1kb), React (44.5kb), Vue (34kb), Solid (7.6kb), and Svelte (1.7kb).


These libraries contribute significantly to the overall weight of a web page’s scripts.


With proper code-splitting, the initial loading time of a page could **decrease** over time.

This project implements a basic CSR app with optimizations like code-splitting and preloading. The goal is for the loading time of individual pages to remain stable as the app scales.


The objective is to simulate a production-grade app's package structure and minimize loading times through parallelized requests.

It’s important to note that improving performance should not come at the cost of developer experience. Therefore, the architecture of this project will be only slightly modified from a typical React setup, avoiding the rigid, opinionated structure of frameworks like Next.js, or the limitations of SSR in general.

This case study will focus on two main aspects: performance and SEO. We will explore how to achieve top scores in both areas.

_Note that although this project is implemented using React, most of the optimizations are framework-agnostic and are purely based on the bundler and the web browser._

# Performance

We will assume a standard Webpack (Rspack) setup and add the required customizations as we progress.

## Bundle Size

The first rule of thumb is to minimize dependencies and, among those, choose the ones with the smallest file sizes.

For example:


We can use _[day.js](https://www.npmjs.com/package/dayjs)_ instead of _[moment](https://www.npmjs.com/package/moment)_, _[zustand](https://www.npmjs.com/package/zustand)_ instead of _[redux toolkit](https://www.npmjs.com/package/@reduxjs/toolkit)_, etc.

This is important not only for CSR apps but also for SSR (and SSG) apps, as larger bundles result in longer load times, delaying when the page becomes visible or interactive.

## Caching

Ideally, every hashed file should be cached, and `index.html` should **never** be cached.
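On Cloudflare Pages, for example (where this project is deployed), such a policy could be expressed in a `_headers` file; the exact paths below are illustrative:

```
/index.html
  Cache-Control: no-cache

/scripts/*
  Cache-Control: public, max-age=31536000, immutable
```

Hashed filenames are what make `immutable` safe: any content change produces a new filename (a new URL), so a cached copy can never go stale.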


This means that the browser would initially cache `main.[hash].js` and would have to redownload it only when its hash (content) changes:

![Network Bundled](images/network-bundled.png)

However, since `main.js` includes the entire bundle, the slightest change in our code would invalidate its cache, forcing the browser to download it all again.


Now, what part of our bundle comprises most of its weight? The answer is the **dependencies**, also called **vendors**.

So if we split the vendors into their own hashed chunk, our code would be separated from the vendors' code, leading to fewer cache invalidations.

Let's add the following _optimization_ to our config file:

_[rspack.config.js](rspack.config.js)_

```js
export default () => {
  return {
    optimization: {
      runtimeChunk: 'single',
      splitChunks: {
        chunks: 'initial',
        cacheGroups: {
          vendors: {
            test: /[\\/]node_modules[\\/]/,
            name: 'vendors'
          }
        }
      }
    }
  }
}
```

This will create a `vendors.[hash].js` file:

![Network Vendors](images/network-vendors.png)

Although this is a substantial improvement, what would happen if we updated a very small dependency?


In such a case, the entire vendors chunk's cache would be invalidated.

So, to improve this even further, we will split **each dependency** into its own hashed chunk:

_[rspack.config.js](rspack.config.js)_

```diff
- name: 'vendors'
+ name: module => {
+ const moduleName = (module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]
+
+ return moduleName.replace('@', '')
+ }
```

This will create files like `react-dom.[hash].js` which contain a single big vendor and a `[id].[hash].js` file which contains all the remaining (small) vendors:

![Network Split Vendors](images/network-split-vendors.png)
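In isolation, the `name` function above derives a chunk name from the package's location inside `node_modules`. A standalone sketch of the same logic:

```javascript
// Same regex as the cacheGroups `name` function above, extracted for illustration
const getVendorName = context => {
  const moduleName = (context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]

  // Strip the scope marker so the emitted filename stays clean
  return moduleName.replace('@', '')
}

console.log(getVendorName('/project/node_modules/react-dom/client.js')) // 'react-dom'
console.log(getVendorName('/project/node_modules/@reduxjs/toolkit')) // 'reduxjs'
```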

More info about the default configurations (such as the split threshold size) can be found here:


https://webpack.js.org/plugins/split-chunks-plugin/#defaults

## Code Splitting

A lot of the features we write end up being used only in a few of our pages, so we would like them to be loaded only when the user visits the page they are being used in.

For example, we wouldn't want users to have to wait for the _[react-big-calendar](https://www.npmjs.com/package/react-big-calendar)_ package to be downloaded, parsed and executed if they merely loaded the _Home_ page. We would only want that to happen when they visit the _Calendar_ page.

The way we can achieve this is (preferably) by route-based code splitting:

_[App.tsx](src/App.tsx)_

```js
const Home = lazy(() => import(/* webpackChunkName: 'home' */ 'pages/Home'))
const LoremIpsum = lazy(() => import(/* webpackChunkName: 'lorem-ipsum' */ 'pages/LoremIpsum'))
const Pokemon = lazy(() => import(/* webpackChunkName: 'pokemon' */ 'pages/Pokemon'))
```

So when users visit the _Pokemon_ page, they only download the main chunk scripts (which includes all shared dependencies such as the framework) and the `pokemon.[hash].js` chunk.

_Note: it is encouraged to download the entire app so that users will experience instant, app-like navigations. But it is a bad idea to batch all assets into a single script, delaying the first render of the page. These assets should be downloaded asynchronously and only after the user-requested page has finished rendering and is entirely visible._
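One way to schedule that background download (a sketch, not part of this project's linked sources) is to wait for the browser to become idle and then inject low-priority `prefetch` links for the remaining chunks. The `doc` parameter exists only to keep the sketch self-contained and testable:

```javascript
// Inject low-priority prefetch links for the remaining page chunks.
// `doc` is parameterized only so the sketch can run outside a browser.
const prefetchScripts = (scripts, doc = document) => {
  scripts.forEach(href => {
    doc.head.appendChild(Object.assign(doc.createElement('link'), { rel: 'prefetch', href, as: 'script' }))
  })
}

// Defer the work until the browser is idle (with a timeout fallback for browsers
// that lack requestIdleCallback)
const schedulePrefetch = scripts => {
  const requestIdle = globalThis.requestIdleCallback || (fn => setTimeout(fn, 2000))

  requestIdle(() => prefetchScripts(scripts))
}
```

Unlike `preload`, `prefetch` hints are fetched at the lowest priority, so they don't compete with the current page's render-critical requests.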

## Preloading Async Pages

Code splitting has one major flaw - the runtime doesn't know which async chunks are needed until the main script executes, leading to them being fetched with a significant delay (as they require another round-trip to the CDN):

![Network Code Splitting](images/network-code-splitting.png)

The way we can solve this issue is by writing a custom plugin that will embed a script in the document, responsible for preloading the relevant assets:

_[rspack.config.js](rspack.config.js)_

```js
import InjectAssetsPlugin from './scripts/inject-assets-plugin.js'

export default () => {
  return {
    plugins: [new InjectAssetsPlugin()]
  }
}
```

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```js
import { join } from 'node:path'
import { readFileSync } from 'node:fs'
import HtmlPlugin from 'html-webpack-plugin'

import pagesManifest from '../src/pages.js'

const __dirname = import.meta.dirname

const getPages = rawAssets => {
  const pages = Object.entries(pagesManifest).map(([chunk, { path, title }]) => {
    const script = rawAssets.find(name => name.includes(`/${chunk}.`) && name.endsWith('.js'))

    return { path, script, title }
  })

  return pages
}

class InjectAssetsPlugin {
  apply(compiler) {
    compiler.hooks.compilation.tap('InjectAssetsPlugin', compilation => {
      HtmlPlugin.getCompilationHooks(compilation).beforeEmit.tapAsync('InjectAssetsPlugin', (data, callback) => {
        const preloadAssets = readFileSync(join(__dirname, '..', 'scripts', 'preload-assets.js'), 'utf-8')

        const rawAssets = compilation.getAssets()
        const pages = getPages(rawAssets)

        let { html } = data

        html = html.replace(
          '</head>',
          () => `<script>const pages=${JSON.stringify(pages)}\n${preloadAssets}</script></head>`
        )

        callback(null, { ...data, html })
      })
    })
  }
}

export default InjectAssetsPlugin
```

_[scripts/preload-assets.js](scripts/preload-assets.js)_

```js
const getPathname = () => {
  let { pathname } = window.location

  if (pathname !== '/') pathname = pathname.replace(/\/$/, '')

  return pathname
}

const getPage = (pathname = getPathname()) => {
  const potentiallyMatchingPages = pages
    .map(page => ({ ...isMatch(pathname, page.path), ...page }))
    .filter(({ match }) => match)

  return potentiallyMatchingPages.find(({ exactMatch }) => exactMatch) || potentiallyMatchingPages[0]
}

const isMatch = (pathname, path) => {
  if (pathname === path) return { exactMatch: true, match: true }
  if (!path.includes(':')) return { match: false }

  const pathnameParts = pathname.split('/')
  const pathParts = path.split('/')
  const match = pathnameParts.every((part, ind) => part === pathParts[ind] || pathParts[ind]?.startsWith(':'))

  return {
    match,
    exactMatch: match && pathnameParts.length === pathParts.length
  }
}

const preloadScript = script => {
  document.head.appendChild(
    Object.assign(document.createElement('link'), { rel: 'preload', href: '/' + script, as: 'script' })
  )
}

const currentPage = getPage()

if (currentPage) {
  const { path, title, script } = currentPage

  preloadScript(script)

  if (title) document.title = title
}
```
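To see how the route matching behaves, here is the same `isMatch` helper with a few illustrative inputs:

```javascript
// Identical matching logic to preload-assets.js, extracted for illustration
const isMatch = (pathname, path) => {
  if (pathname === path) return { exactMatch: true, match: true }
  if (!path.includes(':')) return { match: false }

  const pathnameParts = pathname.split('/')
  const pathParts = path.split('/')
  const match = pathnameParts.every((part, ind) => part === pathParts[ind] || pathParts[ind]?.startsWith(':'))

  return { match, exactMatch: match && pathnameParts.length === pathParts.length }
}

console.log(isMatch('/pokemon/pikachu', '/pokemon/:name')) // { match: true, exactMatch: true }
console.log(isMatch('/about', '/pokemon/:name')) // { match: false, exactMatch: false }
```

Since `getPage` prefers exact matches, a hypothetical static `/pokemon` page would win over the dynamic `/pokemon/:name` route for the pathname `/pokemon`.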

The imported `pages.js` file can be found [here](src/pages.js).

This way, the browser is able to fetch the page-specific script chunk **in parallel** with render-critical assets:

![Network Async Chunks Preload](images/network-async-chunks-preload.png)

## Splitting Async Vendors

Code splitting introduces another problem: async vendor duplication.

Say we have two async chunks: `lorem-ipsum.[hash].js` and `pokemon.[hash].js`.
If they both include the same dependency that is not part of the main chunk, that means the user will download that dependency **twice**.

So if said dependency is `moment` and it weighs 72kb minzipped, then both async chunks' sizes will be **at least** 72kb.

We need to split this dependency from these async chunks so that it could be shared between them:

_[rspack.config.js](rspack.config.js)_

```diff
optimization: {
  runtimeChunk: 'single',
  splitChunks: {
    chunks: 'initial',
    cacheGroups: {
      vendors: {
        test: /[\\/]node_modules[\\/]/,
+       chunks: 'all',
        name: ({ context }) => (context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1].replace('@', '')
      }
    }
  }
}
```

Now both `lorem-ipsum.[hash].js` and `pokemon.[hash].js` will use the extracted `moment.[hash].js` chunk, sparing the user a lot of network traffic (and giving these assets better cache persistence).

However, we have no way of telling which async vendor chunks will be split before we build the application, so we wouldn't know which async vendor chunks we need to preload (refer to the _Preloading Async Pages_ section):

![Network Split Async Vendors](images/network-split-async-vendors.png)

That's why we will append the chunk names to the async vendor's name:

_[rspack.config.js](rspack.config.js)_

```diff
optimization: {
  runtimeChunk: 'single',
  splitChunks: {
    chunks: 'initial',
    cacheGroups: {
      vendors: {
        test: /[\\/]node_modules[\\/]/,
        chunks: 'all',
-       name: ({ context }) => (context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1].replace('@', '')
+       name: (module, chunks) => {
+         const allChunksNames = chunks.map(({ name }) => name).join('.')
+         const moduleName = (module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]
+
+         return `${moduleName}.${allChunksNames}`.replace('@', '')
+       }
      }
    }
  }
}
```

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```diff
const getPages = rawAssets => {
  const pages = Object.entries(pagesManifest).map(([chunk, { path, title }]) => {
-   const script = rawAssets.find(name => name.includes(`/${chunk}.`) && name.endsWith('.js'))
+   const scripts = rawAssets.filter(name => new RegExp(`[/.]${chunk}\\.(.+)\\.js$`).test(name))

-   return { path, script, title }
+   return { path, title, scripts }
  })

  return pages
}
```

_[scripts/preload-assets.js](scripts/preload-assets.js)_

```diff
- const preloadScript = script => {
+ const preloadScripts = scripts => {
+   scripts.forEach(script => {
      document.head.appendChild(
        Object.assign(document.createElement('link'), { rel: 'preload', href: '/' + script, as: 'script' })
      )
+   })
  }
.
.
.
  if (currentPage) {
-   const { path, title, script } = currentPage
+   const { path, title, scripts } = currentPage

-   preloadScript(script)
+   preloadScripts(scripts)

    if (title) document.title = title
  }
```

Now all async vendor chunks will be fetched in parallel with their parent async chunk:

![Network Split Async Vendors Preload](images/network-split-async-vendors-preload.png)

## Preloading Data

One of the presumed disadvantages of CSR over SSR is that the page's data (fetch requests) will be fired only after JS has been downloaded, parsed and executed in the browser:

![Network Data](images/network-data.png)

To overcome this, we will use preloading once again, this time for the data itself, by patching the `fetch` API:

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```diff
const getPages = rawAssets => {
- const pages = Object.entries(pagesManifest).map(([chunk, { path, title }]) => {
+ const pages = Object.entries(pagesManifest).map(([chunk, { path, title, data }]) => {
    const scripts = rawAssets.filter(name => new RegExp(`[/.]${chunk}\\.(.+)\\.js$`).test(name))

-   return { path, title, scripts }
+   return { path, title, scripts, data }
  })

  return pages
}

HtmlPlugin.getCompilationHooks(compilation).beforeEmit.tapAsync('InjectAssetsPlugin', (data, callback) => {
  const preloadAssets = readFileSync(join(__dirname, '..', 'scripts', 'preload-assets.js'), 'utf-8')

  const rawAssets = compilation.getAssets()
  const pages = getPages(rawAssets)
+ const stringifiedPages = JSON.stringify(pages, (_, value) => {
+   return typeof value === 'function' ? `func:${value.toString()}` : value
+ })

  let { html } = data

  html = html.replace(
    '</head>',
-   () => `<script>const pages=${JSON.stringify(pages)}\n${preloadAssets}</script></head>`
+   () => `<script>const pages=${stringifiedPages}\n${preloadAssets}</script></head>`
  )

  callback(null, { ...data, html })
})
```

_[scripts/preload-assets.js](scripts/preload-assets.js)_

```js
const preloadResponses = {}

const originalFetch = window.fetch

window.fetch = async (input, options) => {
  const requestID = `${input.toString()}${options?.body?.toString() || ''}`
  const preloadResponse = preloadResponses[requestID]

  if (preloadResponse) {
    if (!options?.preload) delete preloadResponses[requestID]

    return preloadResponse
  }

  const response = originalFetch(input, options)

  if (options?.preload) preloadResponses[requestID] = response

  return response
}
.
.
.
const preloadData = ({ pathname = getPathname(), path, data }) => {
  data.forEach(({ url, preconnect, ...request }) => {
    if (url.startsWith('func:')) url = eval(url.replace('func:', ''))

    const constructedURL = typeof url === 'string' ? url : url(getDynamicProperties(pathname, path))

    fetch(constructedURL, { ...request, preload: true })

    preconnect?.forEach(url => {
      document.head.appendChild(Object.assign(document.createElement('link'), { rel: 'preconnect', href: url }))
    })
  })
}

const getDynamicProperties = (pathname, path) => {
  const pathParts = path.split('/')
  const pathnameParts = pathname.split('/')
  const dynamicProperties = {}

  for (let i = 0; i < pathParts.length; i++) {
    if (pathParts[i].startsWith(':')) dynamicProperties[pathParts[i].slice(1)] = pathnameParts[i]
  }

  return dynamicProperties
}

const currentPage = getPage()

if (currentPage) {
  const { path, title, scripts, data } = currentPage

  preloadScripts(scripts)

  if (data) preloadData({ path, data })
  if (title) document.title = title
}
```
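The patched `fetch` can be exercised in isolation. Below, a stubbed `originalFetch` (standing in for the real `window.fetch`) shows that a preloaded request hits the network only once:

```javascript
// Stand-in for the real window.fetch (illustrative; counts network hits)
const preloadResponses = {}
let fetchCalls = 0
const originalFetch = async input => {
  fetchCalls++
  return { url: input.toString() }
}

// Same dedup logic as the patched window.fetch above
const patchedFetch = async (input, options) => {
  const requestID = `${input.toString()}${options?.body?.toString() || ''}`
  const preloadResponse = preloadResponses[requestID]

  if (preloadResponse) {
    if (!options?.preload) delete preloadResponses[requestID] // consumed: the next fetch goes to the network
    return preloadResponse
  }

  const response = originalFetch(input, options)
  if (options?.preload) preloadResponses[requestID] = response
  return response
}

patchedFetch('/api/pokemon', { preload: true }) // fires the real request
patchedFetch('/api/pokemon') // reuses the in-flight response
console.log(fetchCalls) // 1
```

Note that the cached value is the response *promise*, so deduplication works even while the preload request is still in flight.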

Reminder: the `pages.js` file can be found [here](src/pages.js).

Now we can see that the data is being fetched right away:

![Network Data Preload](images/network-data-preload.png)

The above script will even preload dynamic routes data (such as _[pokemon/:name](https://client-side-rendering.pages.dev/pokemon/pikachu)_).
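The dynamic-route support comes from `getDynamicProperties`, which maps each `:param` segment of the route to the concrete pathname segment. Standalone:

```javascript
// Same logic as getDynamicProperties in preload-assets.js
const getDynamicProperties = (pathname, path) => {
  const pathParts = path.split('/')
  const pathnameParts = pathname.split('/')
  const dynamicProperties = {}

  for (let i = 0; i < pathParts.length; i++) {
    if (pathParts[i].startsWith(':')) dynamicProperties[pathParts[i].slice(1)] = pathnameParts[i]
  }

  return dynamicProperties
}

console.log(getDynamicProperties('/pokemon/pikachu', '/pokemon/:name')) // { name: 'pikachu' }
```

A `url` function in `pages.js` (serialized via the `func:` convention above) receives this object, so it can construct the request URL from the route parameters.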

## Preloading Next Pages Data

We can conveniently preload the next page's data when the user hovers over or clicks a link, before that page even begins rendering:

_[src/utils/data-preload.ts](src/utils/data-preload.ts)_

```ts
declare function getPage(path: string): Page | undefined
declare function preloadData(page: Page): void

export enum DataType {
  Static = 'static',
  Dynamic = 'dynamic'
}

type Page = {
  pathname: string
  title?: string
  data?: Request[]
}

type Request = RequestInit & {
  url: string
  static?: boolean
  preconnect?: string[]
}

type Events = {
  [event: string]: DataType
}

type EventHandlers = {
  [event: string]: () => void
}

const defaultEvents: Events = {
  onMouseEnter: DataType.Static,
  onTouchStart: DataType.Static,
  onMouseDown: DataType.Dynamic,
  onClick: DataType.Dynamic
}

export const getDataPreloadHandlers = (pathname: string, events: Events = defaultEvents) => {
  const handlers: EventHandlers = {}
  const page = getPage(pathname)
  const { data } = page || {}

  if (!data) return handlers

  const staticData = data.filter(data => data.static)
  const dynamicData = data.filter(data => !data.static)

  for (const event in events) {
    const relevantData = events[event] === DataType.Static ? staticData : dynamicData

    if (relevantData.length) {
      handlers[event] = () => {
        preloadData({ ...page, pathname, data: relevantData })
        delete handlers[event]
      }
    }
  }

  return handlers
}
```

By default, `getDataPreloadHandlers` returns event handlers that preload static data when a link is hovered (`onMouseEnter`) or touched (`onTouchStart`), and dynamic (e.g. database-dependent) data when the link is pressed (`onMouseDown`) or clicked (`onClick`).
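Here is an illustrative, condensed version of that handler logic with stubbed `getPage`/`preloadData` and hypothetical data entries, showing that static data is preloaded on hover and that each handler removes itself after firing:

```javascript
// Stubs standing in for the app's real getPage/preloadData (hypothetical data entries)
const getPage = () => ({ pathname: '/pokemon', data: [{ url: '/api/static', static: true }, { url: '/api/dynamic' }] })
const preloaded = []
const preloadData = ({ data }) => preloaded.push(...data)

// Condensed sketch of getDataPreloadHandlers above (no enum, two events)
const getDataPreloadHandlers = pathname => {
  const handlers = {}
  const page = getPage(pathname)
  if (!page?.data) return handlers

  const events = {
    onMouseEnter: page.data.filter(({ static: isStatic }) => isStatic),
    onMouseDown: page.data.filter(({ static: isStatic }) => !isStatic)
  }

  for (const event in events) {
    if (events[event].length) {
      handlers[event] = () => {
        preloadData({ ...page, pathname, data: events[event] })
        delete handlers[event] // preload each group only once
      }
    }
  }

  return handlers
}

const handlers = getDataPreloadHandlers('/pokemon')
handlers.onMouseEnter() // preloads only the static request
console.log(preloaded.map(({ url }) => url)) // [ '/api/static' ]
```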

## Precaching

Users should have a smooth navigation experience in our app.


However, splitting every page causes a noticeable delay in navigation, since every page has to be downloaded (on-demand) before it can be rendered on screen.

We would want to prefetch and cache all pages ahead of time.

We can do this by writing a simple service worker:

_[rspack.config.js](rspack.config.js)_

```js
import { join } from 'node:path'
import { InjectManifestPlugin } from 'inject-manifest-plugin'

import InjectAssetsPlugin from './scripts/inject-assets-plugin.js'

const __dirname = import.meta.dirname

export default () => {
  return {
    plugins: [
      new InjectManifestPlugin({
        include: [/fonts\//, /scripts\/.+\.js$/],
        swSrc: join(__dirname, 'public', 'service-worker.js'),
        compileSrc: false,
        maximumFileSizeToCacheInBytes: 10000000
      }),
      new InjectAssetsPlugin()
    ]
  }
}
```

_[src/utils/service-worker-registration.ts](src/utils/service-worker-registration.ts)_

```js
const register = () => {
  window.addEventListener('load', async () => {
    try {
      await navigator.serviceWorker.register('/service-worker.js')

      console.log('Service worker registered!')
    } catch (err) {
      console.error(err)
    }
  })
}

const unregister = async () => {
  try {
    const registration = await navigator.serviceWorker.ready

    await registration.unregister()

    console.log('Service worker unregistered!')
  } catch (err) {
    console.error(err)
  }
}

if ('serviceWorker' in navigator) {
  const shouldRegister = process.env.NODE_ENV !== 'development'

  if (shouldRegister) register()
  else unregister()
}
```

_[public/service-worker.js](public/service-worker.js)_

```js
const CACHE_NAME = 'my-csr-app'

// Assets that should never be precached (empty here; populate as needed)
const ignoreAssets = []

const allAssets = self.__WB_MANIFEST.map(({ url }) => url)

const getCache = () => caches.open(CACHE_NAME)

const getCachedAssets = async cache => {
  const keys = await cache.keys()

  return keys.map(({ url }) => `/${url.replace(self.registration.scope, '')}`)
}

const precacheAssets = async () => {
  const cache = await getCache()
  const cachedAssets = await getCachedAssets(cache)
  const assetsToPrecache = allAssets.filter(asset => !cachedAssets.includes(asset) && !ignoreAssets.includes(asset))

  await cache.addAll(assetsToPrecache)
  await removeUnusedAssets()
}

const removeUnusedAssets = async () => {
  const cache = await getCache()
  const cachedAssets = await getCachedAssets(cache)

  cachedAssets.forEach(asset => {
    if (!allAssets.includes(asset)) cache.delete(asset)
  })
}

const fetchAsset = async request => {
  const cache = await getCache()
  const cachedResponse = await cache.match(request)

  return cachedResponse || fetch(request)
}

self.addEventListener('install', event => {
  event.waitUntil(precacheAssets())
  self.skipWaiting()
})

self.addEventListener('fetch', event => {
  const { request } = event

  if (['font', 'script'].includes(request.destination)) event.respondWith(fetchAsset(request))
})
```

Now all pages will be prefetched and cached even before the user tries to navigate to them.

This approach will also generate a full _[code cache](https://v8.dev/blog/code-caching-for-devs#use-service-worker-caches)_.

## Adaptive Source Inlining

When inspecting our 43kb `react-dom.js` file, we can see that the time it took for the request to return was 60ms while the time it took to download the file was 3ms:

![CDN Response Time](images/cdn-response-time.png)

This demonstrates the well-known fact that [RTT](https://en.wikipedia.org/wiki/Round-trip_delay) has a huge impact on web pages load times, sometimes even more than download speed, and even when assets are served from a nearby CDN edge like in our case.

Additionally and more importantly, we can see that after the HTML file is downloaded, we have a large timespan where the browser stays idle and just waits for the scripts to arrive:

![Browser Idle Period](images/browser-idle-period.png)

This is a lot of precious time (marked in red) that the browser could use to download, parse and even execute scripts, speeding up the page's visibility and interactivity.


This inefficiency recurs every time any asset changes (partially invalidating the cache); it isn't something that only happens on the very first visit.

So how can we eliminate this idle time?


We could inline all the initial (critical) scripts in the document, so that they start downloading, parsing and executing while the async page assets are being fetched:

![Inline Initial Scripts](images/inline-initial-scripts.png)

We can see that the browser now gets its initial scripts without having to send another request to the CDN.


So the browser will first send requests for the async chunks and the preloaded data, and while these are pending, it will continue to download and execute the main scripts.


We can see that the async chunks start to download (marked in blue) right after the HTML file finishes downloading, parsing and executing, which saves a lot of time.

While this change makes a significant difference on fast networks, it is even more crucial on slower networks, where the delay is larger and the RTT is much more impactful.

However, this solution has two major issues:

1. We wouldn’t want users to download the 100kb+ HTML file every time they visit our app. We only want that to happen for the very first visit.
2. Since we do not inline the async page’s assets as well, we would probably still be waiting for them to fetch even after the entire HTML has finished downloading, parsing and executing.

To overcome these issues, we can no longer stick to a static HTML file; we shall leverage the power of a server. Or, more precisely, the power of a Cloudflare serverless worker.


This worker should intercept every HTML document request and tailor a response that fits it perfectly.

The entire flow should be described as follows:

1. The browser sends an HTML document request to the Cloudflare worker.
2. The Cloudflare worker checks for the existence of an `X-Cached` header in the request.
   If such a header exists, the worker inlines only the relevant* assets that are absent from it in the response.
   If no such header exists, it inlines all the relevant* assets in the response.
3. The app then extracts all of the inlined assets, caches them in a service worker and precaches all of the other assets.
4. The next time the page is reloaded, the service worker sends the HTML document request along with an `X-Cached` header specifying all of its cached assets.

\* Both initial and page-specific assets.

This ensures that the browser receives exactly the assets it needs (no more, no less) to display the current page **in a single roundtrip**!
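A small but crucial detail in this scheme is how a script is identified across the worker and the service worker: by the content-hash segment of its filename. The worker below relies on this extraction, shown here standalone:

```javascript
// Extract the content hash from a hashed chunk filename (same regex the worker uses)
const getScriptHash = url => (url.match(/(?<=\.)[^.]+(?=\.js$)/) || [])[0]

console.log(getScriptHash('/scripts/pokemon.59a7b2c4.js')) // '59a7b2c4'
console.log(getScriptHash('/scripts/react-dom.home.1f3d5a.js')) // '1f3d5a'
```

Sending just the hashes keeps the `X-Cached` header compact while still uniquely identifying each cached asset version.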

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```js
class InjectAssetsPlugin {
  apply(compiler) {
    const production = compiler.options.mode === 'production'

    compiler.hooks.compilation.tap('InjectAssetsPlugin', compilation => {
      .
      .
      .
    })

    if (!production) return

    compiler.hooks.afterEmit.tapAsync('InjectAssetsPlugin', (compilation, callback) => {
      let worker = readFileSync(join(__dirname, '..', 'build', '_worker.js'), 'utf-8')
      let html = readFileSync(join(__dirname, '..', 'build', 'index.html'), 'utf-8')

      html = html
        .replace(/type=\"module\"/g, () => 'defer')
        .replace(/,"scripts":\s*\[(.*?)\]/g, () => '')
        .replace('preloadScripts(scripts)', () => '')

      const rawAssets = compilation.getAssets()
      const pages = getPages(rawAssets)
      const assets = rawAssets
        .filter(({ name }) => /^scripts\/.+\.js$/.test(name))
        .map(({ name, source }) => ({
          url: `/${name}`,
          source: source.source(),
          parentPaths: pages.filter(({ scripts }) => scripts.includes(name)).map(({ path }) => path)
        }))

      const initialScriptsString = html.match(/<script[^>]*>([\s\S]*?)(?=<\/head>)/)[0]
      const initialScriptsStrings = initialScriptsString.split('</script>')
      const initialScripts = assets
        .filter(({ url }) => initialScriptsString.includes(url))
        .map(asset => ({ ...asset, order: initialScriptsStrings.findIndex(script => script.includes(asset.url)) }))
        .sort((a, b) => a.order - b.order)
      const asyncScripts = assets.filter(asset => !initialScripts.includes(asset))

      worker = worker
        .replace('INJECT_INITIAL_SCRIPTS_STRING_HERE', () => JSON.stringify(initialScriptsString))
        .replace('INJECT_INITIAL_SCRIPTS_HERE', () => JSON.stringify(initialScripts))
        .replace('INJECT_ASYNC_SCRIPTS_HERE', () => JSON.stringify(asyncScripts))
        .replace('INJECT_HTML_HERE', () => JSON.stringify(html))

      writeFileSync(join(__dirname, '..', 'build', '_worker.js'), worker)

      callback()
    })
  }
}

export default InjectAssetsPlugin
```

_[public/\_worker.js](public/_worker.js)_

```js
const initialScriptsString = INJECT_INITIAL_SCRIPTS_STRING_HERE
const initialScripts = INJECT_INITIAL_SCRIPTS_HERE
const asyncScripts = INJECT_ASYNC_SCRIPTS_HERE
const html = INJECT_HTML_HERE

const allScripts = [...initialScripts, ...asyncScripts]
const documentHeaders = {
  'Cache-Control': 'public, max-age=0, must-revalidate',
  'Content-Type': 'text/html; charset=utf-8'
}

const isMatch = (pathname, path) => {
  if (pathname === path) return { exactMatch: true, match: true }
  if (!path.includes(':')) return { match: false }

  const pathnameParts = pathname.split('/')
  const pathParts = path.split('/')
  const match = pathnameParts.every((part, ind) => part === pathParts[ind] || pathParts[ind]?.startsWith(':'))

  return {
    match,
    exactMatch: match && pathnameParts.length === pathParts.length
  }
}

export default {
  fetch(request, env) {
    const pathname = new URL(request.url).pathname.toLowerCase()
    const userAgent = (request.headers.get('User-Agent') || '').toLowerCase()
    const xCached = request.headers.get('X-Cached')
    const bypassWorker = ['prerender', 'googlebot'].some(bot => userAgent.includes(bot)) || pathname.includes('.')

    if (bypassWorker) return env.ASSETS.fetch(request)

    const cachedScripts = xCached
      ? allScripts.filter(({ url }) => xCached.includes(url.match(/(?<=\.)[^.]+(?=\.js$)/)[0]))
      : []
    const uncachedScripts = allScripts.filter(script => !cachedScripts.includes(script))

    if (!uncachedScripts.length) {
      return new Response(html, { headers: documentHeaders })
    }

    let body = html.replace(initialScriptsString, () => '')

    const injectedInitialScriptsString = initialScripts
      .map(script =>
        cachedScripts.includes(script)
          ? `<script src="${script.url}" defer></script>`
          : `<script>${script.source}</script>`
      )
      .join('\n')

    body = body.replace('