{"id":13599690,"url":"https://github.com/theninthsky/client-side-rendering","last_synced_at":"2025-04-10T17:32:04.614Z","repository":{"id":37391970,"uuid":"493401267","full_name":"theninthsky/client-side-rendering","owner":"theninthsky","description":"A case study of CSR.","archived":false,"fork":false,"pushed_at":"2024-09-10T16:59:10.000Z","size":11563,"stargazers_count":783,"open_issues_count":1,"forks_count":35,"subscribers_count":12,"default_branch":"main","last_synced_at":"2024-09-10T18:53:11.095Z","etag":null,"topics":["client-side-rendering","csr","nextjs","performance","seo","server-side-rendering","ssg","ssr","stale-while-revalidate","static-site-generation","swr"],"latest_commit_sha":null,"homepage":"https://client-side-rendering.pages.dev","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/theninthsky.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-05-17T20:18:22.000Z","updated_at":"2024-09-10T16:59:14.000Z","dependencies_parsed_at":"2024-01-14T04:45:49.865Z","dependency_job_id":"1bfb53ae-d403-4c51-bd95-419e6a82ece2","html_url":"https://github.com/theninthsky/client-side-rendering","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/theninthsky%2Fclient-side-rendering","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/theninthsky%2Fclient-side-rendering/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/theninthsky%2Fclient-side-rendering/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/theninthsky%2Fclient-side-rendering/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/theninthsky","download_url":"https://codeload.github.com/theninthsky/client-side-rendering/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223442601,"owners_count":17145805,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["client-side-rendering","csr","nextjs","performance","seo","server-side-rendering","ssg","ssr","stale-while-revalidate","static-site-generation","swr"],"created_at":"2024-08-01T17:01:09.160Z","updated_at":"2025-04-10T17:32:04.601Z","avatar_url":"https://github.com/theninthsky.png","language":"TypeScript","readme":"\u003ch1 align=\"center\"\u003eClient-side Rendering\u003c/h1\u003e\n\nThis project serves as a case study on CSR, examining the potential of client-side rendered apps in comparison to server-side rendering.\n\nFor a detailed comparison of all rendering methods, visit the project's _Comparison_ page: 
https://client-side-rendering.pages.dev/comparison\n\nThe findings of this project resulted in the creation of [Adina](https://adinajs.com).\n\n## Table of Contents\n\n- [Intro](#intro)\n- [Motivation](#motivation)\n- [Performance](#performance)\n  - [Bundle Size](#bundle-size)\n  - [Caching](#caching)\n  - [Code Splitting](#code-splitting)\n  - [Preloading Async Pages](#preloading-async-pages)\n  - [Splitting Async Vendors](#splitting-async-vendors)\n  - [Preloading Data](#preloading-data)\n  - [Preloading Next Pages Data](#preloading-next-pages-data)\n  - [Precaching](#precaching)\n  - [Adaptive Source Inlining](#adaptive-source-inlining)\n  - [Leveraging the 304 Status Code](#leveraging-the-304-status-code)\n  - [Navigation Preload](#navigation-preload)\n  - [Tweaking Further](#tweaking-further)\n    - [Transitioning Async Pages](#transitioning-async-pages)\n    - [Revalidating Active Apps](#revalidating-active-apps)\n  - [Summary](#summary)\n  - [Deploying](#deploying)\n  - [Benchmark](#benchmark)\n  - [Areas for Improvement](#areas-for-improvement)\n- [SEO](#seo)\n  - [Indexing](#indexing)\n    - [Google](#google)\n    - [Prerendering](#prerendering)\n  - [Social Media Share Previews](#social-media-share-previews)\n  - [Sitemaps](#sitemaps)\n- [CSR vs. SSR](#csr-vs-ssr)\n  - [SSR Disadvantages](#ssr-disadvantages)\n  - [Why Not SSG?](#why-not-ssg)\n  - [The Cost of Hydration](#the-cost-of-hydration)\n- [Conclusion](#conclusion)\n  - [What Might Change in the Future](#what-might-change-in-the-future)\n\n# Intro\n\n**Client-side rendering (CSR)** refers to sending static assets to the web browser and allowing it to handle the entire rendering process of the app.  \n**Server-side rendering (SSR)** involves rendering the entire app (or page) on the server and delivering a pre-rendered HTML document ready for display.  \n**Static Site Generation (SSG)** is the process of pre-generating HTML pages as static assets, which are then sent and displayed by the browser.\n\nContrary to common belief, the SSR process in modern frameworks like **React**, **Angular**, **Vue**, and **Svelte** results in the app rendering twice: once on the server and again on the browser (this is known as \"hydration\"). 
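To make the distinction concrete, here is a minimal sketch of the two React 18 entry points (the container ID is illustrative):

```js
import { createRoot, hydrateRoot } from 'react-dom/client'

import App from './App'

const container = document.getElementById('root')

// CSR: build the DOM from scratch inside an empty container
createRoot(container).render(<App />)

// SSR/SSG: attach to the server-rendered HTML instead, triggering the "hydration" second render
// hydrateRoot(container, <App />)
```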
In both SSR and SSG, the HTML document is fully constructed, providing the following benefits:

- Web crawlers can index the pages out-of-the-box, which is crucial for SEO.
- The first contentful paint (FCP) is usually very fast (although in SSR, this depends heavily on API server response times).

On the other hand, CSR apps offer the following advantages:

- The app is completely decoupled from the server, meaning it loads independently of the API server's response times, enabling smooth page transitions.
- The developer experience is streamlined, as there's no need to worry about which parts of the code run on the server and which run in the browser.

In this case study, we'll focus on CSR and explore ways to overcome its apparent limitations while leveraging its strengths to the fullest.

All optimizations will be incorporated into the deployed app, which can be found here: [https://client-side-rendering.pages.dev](https://client-side-rendering.pages.dev).

# Motivation

_"Recently, SSR (Server Side Rendering) has taken the JavaScript front-end world by storm. The fact that you can now render your sites and apps on the server before sending them to your clients is an absolutely **revolutionary** idea (and totally not what everyone was doing before JS client-side apps got popular in the first place...)._

_However, the same criticisms that were valid for PHP, ASP, JSP, (and such) sites are valid for server-side rendering today. It's slow, breaks fairly easily, and is difficult to implement properly._

_Thing is, despite what everyone might be telling you, you probably don't need SSR. You can get almost all the advantages of it (without the disadvantages) by using prerendering."_

_~[Prerender SPA Plugin](https://github.com/chrisvfritz/prerender-spa-plugin#what-is-prerendering)_

In recent years, server-side rendering has gained significant popularity in the form of frameworks such as _[Next.js](https://nextjs.org)_ and _[Remix](https://remix.run)_, to the point that developers often default to using them without fully understanding their limitations, even in apps that don't need SEO (e.g., those with login requirements).
<br>
While SSR has its advantages, these frameworks continue to emphasize their speed ("Performance as a default"), suggesting that client-side rendering (CSR) is inherently slow.
<br>
Additionally, there is a widespread misconception that perfect SEO can only be achieved with SSR, and that CSR apps cannot be optimized for search engine crawlers.

Another common argument for SSR is that as web apps grow larger, their loading times will continue to increase, leading to poor _[FCP](https://web.dev/fcp)_ performance for CSR apps.

While it's true that apps are becoming more feature-rich, the size of a single page should actually **decrease** over time.
<br>
This is due to the trend of creating smaller, more efficient versions of libraries and frameworks, such as _zustand_, _day.js_, _headless-ui_, and _react-router v6_.
<br>
We can also observe a reduction in the size of frameworks over time: Angular (74.1kb), React (44.5kb), Vue (34kb), Solid (7.6kb), and Svelte (1.7kb).
<br>
These libraries contribute significantly to the overall weight of a web page's scripts.
<br>
With proper code-splitting, the initial loading time of a page could therefore **decrease** over time.

This project implements a basic CSR app with optimizations like code-splitting and preloading. The goal is for the loading time of individual pages to remain stable as the app scales.
<br>
The objective is to simulate a production-grade app's package structure and minimize loading times through parallelized requests.

It's important to note that improving performance should not come at the cost of developer experience. Therefore, the architecture of this project is only slightly modified from a typical React setup, avoiding the rigid, opinionated structure of frameworks like Next.js, or the limitations of SSR in general.

This case study will focus on two main aspects: performance and SEO. We will explore how to achieve top scores in both areas.

_Note that although this project is implemented using React, most of the optimizations are framework-agnostic and are purely based on the bundler and the web browser._

# Performance

We will assume a standard Webpack (Rspack) setup and add the required customizations as we progress.

## Bundle Size

The first rule of thumb is to minimize dependencies and, among those, choose the ones with the smallest file sizes.

For example:
<br>
We can use _[day.js](https://www.npmjs.com/package/dayjs)_ instead of _[moment](https://www.npmjs.com/package/moment)_, _[zustand](https://www.npmjs.com/package/zustand)_ instead of _[redux toolkit](https://www.npmjs.com/package/@reduxjs/toolkit)_, etc.
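Such swaps are often nearly drop-in; day.js, for instance, mirrors most of moment's API at roughly 2kb minzipped instead of moment's ~72kb (an illustrative sketch):

```js
// Before: moment (~72kb minzipped)
import moment from 'moment'

moment('2024-01-01').format('MMM D, YYYY')

// After: day.js (~2kb minzipped), largely API-compatible
import dayjs from 'dayjs'

dayjs('2024-01-01').format('MMM D, YYYY')
```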
This is important not only for CSR apps but also for SSR (and SSG) apps, as larger bundles result in longer load times, delaying when the page becomes visible or interactive.

## Caching

Ideally, every hashed file should be cached, and `index.html` should **never** be cached.
<br>
This means that the browser would initially cache `main.[hash].js` and would have to redownload it only if its hash (content) changes:

![Network Bundled](images/network-bundled.png)
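On Cloudflare Pages (where this app is deployed), such a policy could be expressed with a `_headers` file; a minimal sketch (the paths are illustrative):

```
/index.html
  Cache-Control: no-cache

/scripts/*
  Cache-Control: public, max-age=31536000, immutable
```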
However, since `main.js` includes the entire bundle, the slightest change in code would cause its cache to expire, meaning the browser would have to download it again.
<br>
Now, what part of our bundle comprises most of its weight? The answer is the **dependencies**, also called **vendors**.

So if we could split the vendors into their own hashed chunk, that would create a separation between our code and the vendors' code, leading to fewer cache invalidations.

Let's add the following _optimization_ to our config file:

_[rspack.config.js](rspack.config.js)_

```js
export default () => {
  return {
    optimization: {
      runtimeChunk: 'single',
      splitChunks: {
        chunks: 'initial',
        cacheGroups: {
          vendors: {
            test: /[\\/]node_modules[\\/]/,
            name: 'vendors'
          }
        }
      }
    }
  }
}
```

This will create a `vendors.[hash].js` file:

![Network Vendors](images/network-vendors.png)

Although this is a substantial improvement, what would happen if we updated a very small dependency?
<br>
In such a case, the entire vendors chunk's cache would be invalidated.

So, in order to improve this even further, we will split **each dependency** into its own hashed chunk:

_[rspack.config.js](rspack.config.js)_

```diff
- name: 'vendors'
+ name: module => {
+  const moduleName = (module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]
+
+  return moduleName.replace('@', '')
+ }
```

This will create files like `react-dom.[hash].js`, which contains a single big vendor, and a `[id].[hash].js` file, which contains all the remaining (small) vendors:

![Network Split Vendors](images/network-split-vendors.png)

More info about the default configurations (such as the split threshold size) can be found here:
<br>
https://webpack.js.org/plugins/split-chunks-plugin/#defaults

## Code Splitting

A lot of the features we write end up being used in only a few of our pages, so we would like them to be loaded only when the user visits the pages they are used in.

For example, we wouldn't want users to have to wait until the _[react-big-calendar](https://www.npmjs.com/package/react-big-calendar)_ package is downloaded, parsed and executed if they merely loaded the _Home_ page. We would only want that to happen when they visit the _Calendar_ page.

The way we can achieve this is (preferably) by route-based code splitting:

_[App.tsx](src/App.tsx)_

```js
const Home = lazy(() => import(/* webpackChunkName: 'home' */ 'pages/Home'))
const LoremIpsum = lazy(() => import(/* webpackChunkName: 'lorem-ipsum' */ 'pages/LoremIpsum'))
const Pokemon = lazy(() => import(/* webpackChunkName: 'pokemon' */ 'pages/Pokemon'))
```

So when users visit the _Pokemon_ page, they only download the main chunk scripts (which include all shared dependencies, such as the framework) and the `pokemon.[hash].js` chunk.

_Note: it is encouraged to download the entire app so that users will experience instant, app-like navigations. But it is a bad idea to batch all assets into a single script, delaying the first render of the page.
<br>
These assets should be downloaded asynchronously, and only after the user-requested page has finished rendering and is entirely visible._
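The _[Precaching](#precaching)_ section below achieves this robustly with a service worker; as a bare-bones illustration, the remaining page chunks could simply be prefetched once the browser goes idle (a sketch that assumes the injected `pages` manifest, in its final shape, from the following sections):

```js
// Prefetch all remaining page chunks at low priority once the browser is idle,
// so subsequent navigations resolve from the HTTP cache instantly
const prefetchRemainingChunks = () => {
  pages.forEach(({ scripts = [] }) => {
    scripts.forEach(script => {
      document.head.appendChild(
        Object.assign(document.createElement('link'), { rel: 'prefetch', href: '/' + script, as: 'script' })
      )
    })
  })
}

window.addEventListener('load', () => {
  if ('requestIdleCallback' in window) requestIdleCallback(prefetchRemainingChunks)
  else setTimeout(prefetchRemainingChunks, 2000)
})
```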
## Preloading Async Pages

Code splitting has one major flaw: the runtime doesn't know which async chunks are needed until the main script executes, so they are fetched with a significant delay (they require another round-trip to the CDN):

![Network Code Splitting](images/network-code-splitting.png)

The way we can solve this issue is by writing a custom plugin that embeds a script in the document, responsible for preloading the relevant assets:

_[rspack.config.js](rspack.config.js)_

```js
import InjectAssetsPlugin from './scripts/inject-assets-plugin.js'

export default () => {
  return {
    plugins: [new InjectAssetsPlugin()]
  }
}
```

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```js
import { join } from 'node:path'
import { readFileSync } from 'node:fs'
import HtmlPlugin from 'html-webpack-plugin'

import pagesManifest from '../src/pages.js'

const __dirname = import.meta.dirname

const getPages = rawAssets => {
  const pages = Object.entries(pagesManifest).map(([chunk, { path, title }]) => {
    const script = rawAssets.find(name => name.includes(`/${chunk}.`) && name.endsWith('.js'))

    return { path, script, title }
  })

  return pages
}

class InjectAssetsPlugin {
  apply(compiler) {
    compiler.hooks.compilation.tap('InjectAssetsPlugin', compilation => {
      HtmlPlugin.getCompilationHooks(compilation).beforeEmit.tapAsync('InjectAssetsPlugin', (data, callback) => {
        const preloadAssets = readFileSync(join(__dirname, '..', 'scripts', 'preload-assets.js'), 'utf-8')

        const rawAssets = compilation.getAssets()
        const pages = getPages(rawAssets)

        let { html } = data

        html = html.replace(
          '</title>',
          () => `</title><script id="preload-data">const pages=${JSON.stringify(pages)}\n${preloadAssets}</script>`
        )

        callback(null, { ...data, html })
      })
    })
  }
}

export default InjectAssetsPlugin
```

_[scripts/preload-assets.js](scripts/preload-assets.js)_

```js
const getPathname = () => {
  let { pathname } = window.location

  if (pathname !== '/') pathname = pathname.replace(/\/$/, '')

  return pathname
}

const getPage = (pathname = getPathname()) => {
  const potentiallyMatchingPages = pages
    .map(page => ({ ...isMatch(pathname, page.path), ...page }))
    .filter(({ match }) => match)

  return potentiallyMatchingPages.find(({ exactMatch }) => exactMatch) || potentiallyMatchingPages[0]
}

const isMatch = (pathname, path) => {
  if (pathname === path) return { exactMatch: true, match: true }
  if (!path.includes(':')) return { match: false }

  const pathnameParts = pathname.split('/')
  const pathParts = path.split('/')
  const match = pathnameParts.every((part, ind) => part === pathParts[ind] || pathParts[ind]?.startsWith(':'))

  return {
    match,
    exactMatch: match && pathnameParts.length === pathParts.length
  }
}

const preloadScript = script => {
  document.head.appendChild(
    Object.assign(document.createElement('link'), { rel: 'preload', href: '/' + script, as: 'script' })
  )
}

const currentPage = getPage()

if (currentPage) {
  const { path, title, script } = currentPage

  preloadScript(script)

  if (title) document.title = title
}
```

The imported `pages.js` file can be found [here](src/pages.js).
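Inferring from how the manifest is consumed above, its shape is along these lines (a hypothetical sketch; the paths and titles are illustrative, and a `data` field is added to entries later, in _Preloading Data_):

```js
export default {
  home: { path: '/', title: 'Home' },
  'lorem-ipsum': { path: '/lorem-ipsum', title: 'Lorem Ipsum' },
  pokemon: { path: '/pokemon', title: 'Pokemon' },
  'pokemon-info': { path: '/pokemon/:name', title: 'Pokemon Info' }
}
```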
This way, the browser is able to fetch the page-specific script chunk **in parallel** with render-critical assets:

![Network Async Chunks Preload](images/network-async-chunks-preload.png)

## Splitting Async Vendors

Code splitting introduces another problem: async vendor duplication.

Say we have two async chunks: `lorem-ipsum.[hash].js` and `pokemon.[hash].js`.
If they both include the same dependency that is not part of the main chunk, the user will download that dependency **twice**.

So if that said dependency is `moment` and it weighs 72kb minzipped, then both async chunks' sizes will be **at least** 72kb.

We need to split this dependency out of these async chunks so that it can be shared between them:

_[rspack.config.js](rspack.config.js)_

```diff
optimization: {
  runtimeChunk: 'single',
  splitChunks: {
    chunks: 'initial',
    cacheGroups: {
      vendors: {
        test: /[\\/]node_modules[\\/]/,
+       chunks: 'all',
        name: ({ context }) => (context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1].replace('@', '')
      }
    }
  }
}
```

Now both `lorem-ipsum.[hash].js` and `pokemon.[hash].js` will use the extracted `moment.[hash].js` chunk, sparing the user a lot of network traffic (and giving these assets better cache persistence).

However, we have no way of telling which async vendor chunks will be split before we build the application, so we wouldn't know which async vendor chunks we need to preload (refer to the "Preloading Async Pages" section):

![Network Split Async Vendors](images/network-split-async-vendors.png)

That's why we will append the names of the chunks that use it to the async vendor chunk's name:

_[rspack.config.js](rspack.config.js)_

```diff
optimization: {
  runtimeChunk: 'single',
  splitChunks: {
    chunks: 'initial',
    cacheGroups: {
      vendors: {
        test: /[\\/]node_modules[\\/]/,
        chunks: 'all',
-       name: ({ context }) => (context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1].replace('@', '')
+       name: (module, chunks) => {
+         const allChunksNames = chunks.map(({ name }) => name).join('.')
+         const moduleName = (module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]
+
+         return `${moduleName}.${allChunksNames}`.replace('@', '')
+       }
      }
    }
  }
}
```

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```diff
const getPages = rawAssets => {
  const pages = Object.entries(pagesManifest).map(([chunk, { path, title }]) => {
-   const script = rawAssets.find(name => name.includes(`/${chunk}.`) && name.endsWith('.js'))
+   const scripts = rawAssets.filter(name => new RegExp(`[/.]${chunk}\\.(.+)\\.js$`).test(name))

-   return { path, title, script }
+   return { path, title, scripts }
  })

  return pages
}
```

_[scripts/preload-assets.js](scripts/preload-assets.js)_

```diff
- const preloadScript = script => {
+ const preloadScripts = scripts => {
+  scripts.forEach(script => {
     document.head.appendChild(
       Object.assign(document.createElement('link'), { rel: 'preload', href: '/' + script, as: 'script' })
     )
+  })
 }
.
.
.
 if (currentPage) {
-  const { path, title, script } = currentPage
+  const { path, title, scripts } = currentPage

-  preloadScript(script)
+  preloadScripts(scripts)

   if (title) document.title = title
 }
```

Now all async vendor chunks will be fetched in parallel with their parent async chunk:

![Network Split Async Vendors Preload](images/network-split-async-vendors-preload.png)

## Preloading Data

One of the presumed disadvantages of CSR compared to SSR is that the page's data (fetch requests) will only be fired after the JS has been downloaded, parsed and executed in the browser:

![Network Data](images/network-data.png)

To overcome this, we will use preloading once again, this time for the data itself, by patching the `fetch` API:

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```diff
const getPages = rawAssets => {
-  const pages = Object.entries(pagesManifest).map(([chunk, { path, title }]) => {
+  const pages = Object.entries(pagesManifest).map(([chunk, { path, title, data }]) => {
  const scripts = rawAssets.filter(name => new RegExp(`[/.]${chunk}\\.(.+)\\.js$`).test(name))

-   return { path, title, scripts }
+   return { path, title, scripts, data }
  })

  return pages
}

HtmlPlugin.getCompilationHooks(compilation).beforeEmit.tapAsync('InjectAssetsPlugin', (data, callback) => {
  const preloadAssets = readFileSync(join(__dirname, '..', 'scripts', 'preload-assets.js'), 'utf-8')

  const rawAssets = compilation.getAssets()
  const pages = getPages(rawAssets)
+ const stringifiedPages = JSON.stringify(pages, (_, value) => {
+   return typeof value === 'function' ? `func:${value.toString()}` : value
+ })

  let { html } = data

  html = html.replace(
    '</title>',
-   () => `</title><script id="preload-data">const pages=${JSON.stringify(pages)}\n${preloadAssets}</script>`
+   () => `</title><script id="preload-data">const pages=${stringifiedPages}\n${preloadAssets}</script>`
  )

  callback(null, { ...data, html })
})
```

_[scripts/preload-assets.js](scripts/preload-assets.js)_

```js
const preloadResponses = {}

const originalFetch = window.fetch

window.fetch = async (input, options) => {
  const requestID = `${input.toString()}${options?.body?.toString() || ''}`
  const preloadResponse = preloadResponses[requestID]

  if (preloadResponse) {
    if (!options?.preload) delete preloadResponses[requestID]

    return preloadResponse
  }

  const response = originalFetch(input, options)

  if (options?.preload) preloadResponses[requestID] = response

  return response
}
.
.
.
const preloadData = ({ pathname = getPathname(), path, data }) => {
  data.forEach(({ url, preconnect, ...request }) => {
    if (url.startsWith('func:')) url = eval(url.replace('func:', ''))

    const constructedURL = typeof url === 'string' ? url : url(getDynamicProperties(pathname, path))

    fetch(constructedURL, { ...request, preload: true })

    preconnect?.forEach(url => {
      document.head.appendChild(Object.assign(document.createElement('link'), { rel: 'preconnect', href: url }))
    })
  })
}

const getDynamicProperties = (pathname, path) => {
  const pathParts = path.split('/')
  const pathnameParts = pathname.split('/')
  const dynamicProperties = {}

  for (let i = 0; i < pathParts.length; i++) {
    if (pathParts[i].startsWith(':')) dynamicProperties[pathParts[i].slice(1)] = pathnameParts[i]
  }

  return dynamicProperties
}

const currentPage = getPage()

if (currentPage) {
  const { path, title, scripts, data } = currentPage

  preloadScripts(scripts)

  if (data) preloadData({ path, data })
  if (title) document.title = title
}
```

Reminder: the `pages.js` file can be found [here](src/pages.js).

Now we can see that the data is being fetched right away:

![Network Data Preload](images/network-data-preload.png)

The above script will even preload data for dynamic routes (such as _[pokemon/:name](https://client-side-rendering.pages.dev/pokemon/pikachu)_).
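Since the patch intercepts every `fetch` call, page components need no changes; their first request for a preloaded URL resolves with the stored in-flight response. A hypothetical example (the URL is illustrative):

```js
// Inside the Pokemon page: the preload script already requested this URL with
// { preload: true }, so the patched fetch returns the stored in-flight response
// instead of firing a second network request
const fetchPokemon = async () => {
  const response = await fetch('https://pokeapi.co/api/v2/pokemon?limit=9999')

  return response.json()
}
```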
## Preloading Next Pages Data

We can conveniently preload the next page's data when the user hovers over a link or clicks it, before the page is fully rendered:

_[src/utils/data-preload.ts](src/utils/data-preload.ts)_

```ts
declare function getPage(path: string): Page | undefined

declare function preloadData(page: Page): void

export enum DataType {
  Static = 'static',
  Dynamic = 'dynamic'
}

type Page = {
  pathname: string
  title?: string
  data?: Request[]
}

type Request = RequestInit & {
  url: string
  static?: boolean
  preconnect?: string[]
}

type Events = {
  [event: string]: DataType
}

type EventHandlers = {
  [event: string]: () => void
}

const defaultEvents: Events = {
  onMouseEnter: DataType.Static,
  onTouchStart: DataType.Static,
  onMouseDown: DataType.Dynamic,
  onClick: DataType.Dynamic
}

export const getDataPreloadHandlers = (pathname: string, events: Events = defaultEvents) => {
  const handlers: EventHandlers = {}
  const page = getPage(pathname)
  const { data } = page || {}

  if (!data) return handlers

  const staticData = data.filter(data => data.static)
  const dynamicData = data.filter(data => !data.static)

  for (const event in events) {
    const relevantData = events[event] === DataType.Static ? staticData : dynamicData

    if (relevantData.length) {
      handlers[event] = () => {
        preloadData({ ...page, pathname, data: relevantData })
        delete handlers[event]
      }
    }
  }

  return handlers
}
```

By default, `getDataPreloadHandlers` returns event listeners that preload static data when a link is hovered (on desktop) or touched (on mobile), and preload database-dependent data when a link is pressed (on desktop) or fully clicked (on mobile).
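A hypothetical usage sketch, spreading the returned handlers onto a navigation link (the component name is illustrative):

```js
import { NavLink } from 'react-router-dom'

import { getDataPreloadHandlers } from 'utils/data-preload'

// Hovering/touching the link preloads the target page's static data,
// while pressing/clicking it preloads the dynamic (database-dependent) data
const PreloadingLink = ({ to, children }) => (
  <NavLink to={to} {...getDataPreloadHandlers(to)}>
    {children}
  </NavLink>
)

export default PreloadingLink
```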
## Precaching

Users should have a smooth navigation experience in our app.
<br>
However, splitting every page causes a noticeable delay in navigation, since every page has to be downloaded (on-demand) before it can be rendered on screen.

We would want to prefetch and cache all pages ahead of time.

We can do this by writing a simple service worker:

_[rspack.config.js](rspack.config.js)_

```js
import { join } from 'node:path'
import { InjectManifestPlugin } from 'inject-manifest-plugin'

import InjectAssetsPlugin from './scripts/inject-assets-plugin.js'

const __dirname = import.meta.dirname

export default () => {
  return {
    plugins: [
      new InjectManifestPlugin({
        include: [/fonts\//, /scripts\/.+\.js$/],
        swSrc: join(__dirname, 'public', 'service-worker.js'),
        compileSrc: false,
        maximumFileSizeToCacheInBytes: 10000000
      }),
      new InjectAssetsPlugin()
    ]
  }
}
```

_[src/utils/service-worker-registration.ts](src/utils/service-worker-registration.ts)_

```js
const register = () => {
  window.addEventListener('load', async () => {
    try {
      await navigator.serviceWorker.register('/service-worker.js')

      console.log('Service worker registered!')
    } catch (err) {
      console.error(err)
    }
  })
}

const unregister = async () => {
  try {
    const registration = await navigator.serviceWorker.ready

    await registration.unregister()

    console.log('Service worker unregistered!')
  } catch (err) {
    console.error(err)
  }
}

if ('serviceWorker' in navigator) {
  const shouldRegister = process.env.NODE_ENV !== 'development'

  if (shouldRegister) register()
  else unregister()
}
```

_[public/service-worker.js](public/service-worker.js)_

```js
const CACHE_NAME = 'my-csr-app'

const allAssets = self.__WB_MANIFEST.map(({ url }) => url)

const getCache = () => caches.open(CACHE_NAME)

const getCachedAssets = async cache => {
  const keys = await cache.keys()

  return keys.map(({ url }) => `/${url.replace(self.registration.scope, '')}`)
}

const precacheAssets = async () => {
  const cache = await getCache()
  const cachedAssets = await getCachedAssets(cache)
  const assetsToPrecache = allAssets.filter(asset => !cachedAssets.includes(asset))

  await cache.addAll(assetsToPrecache)
  await removeUnusedAssets()
}

const removeUnusedAssets = async () => {
  const cache = await getCache()
  const cachedAssets = await getCachedAssets(cache)

  cachedAssets.forEach(asset => {
    if (!allAssets.includes(asset)) cache.delete(asset)
  })
}

const fetchAsset = async request => {
  const cache = await getCache()
  const cachedResponse = await cache.match(request)

  return cachedResponse || fetch(request)
}

self.addEventListener('install', event => {
  event.waitUntil(precacheAssets())
  self.skipWaiting()
})

self.addEventListener('fetch', event => {
  const { request } = event

  if (['font', 'script'].includes(request.destination)) event.respondWith(fetchAsset(request))
})
```

Now all pages will be prefetched and cached even before the user tries to navigate to them.

This approach will also generate a full _[code cache](https://v8.dev/blog/code-caching-for-devs#use-service-worker-caches)_.

## Adaptive Source Inlining

When inspecting our 43kb `react-dom.js` file, we can see that the time it took for the request to return was 60ms, while the time it took to download the file was 3ms:

![CDN Response Time](images/cdn-response-time.png)

This demonstrates the well-known fact that [RTT](https://en.wikipedia.org/wiki/Round-trip_delay) has a huge impact on web page load times, sometimes even more than download speed, and even when assets are served from a nearby CDN edge like in our case.

Additionally, and more importantly, we can see that after the HTML file is downloaded, we have a large timespan where the browser stays idle and just waits for the scripts to arrive:

![Browser Idle Period](images/browser-idle-period.png)

This is a lot of precious time (marked in red) that the browser could use to download, parse and even execute scripts, speeding up the page's visibility and interactivity.
<br>
This inefficiency will recur every time assets change (partial cache); it isn't something that only happens on the very first visit.

So how can we eliminate this idle time?
<br>
We could inline all the initial (critical) scripts in the document, so that the browser can download, parse and even execute them while the async page assets are being fetched:

![Inline Initial Scripts](images/inline-initial-scripts.png)

We can see that the browser now gets its initial scripts without having to send another request to the CDN.
<br>
So the browser will first send requests for the async chunks and the preloaded data, and while these are pending, it will continue to download and execute the main scripts.
<br>
We can see that the async chunks start to download (marked in blue) right after the HTML file finishes downloading, parsing and executing, which saves a lot of time.

While this change makes a significant difference on fast networks, it is even more crucial for slower networks, where the delay is larger and the RTT is much more impactful.

However, this solution has two major issues:

1. We wouldn't want users to download the 100kb+ HTML file every time they visit our app. We only want that to happen on the very first visit.
2. Since we do not inline the async page's assets as well, we would probably still be waiting for them to be fetched even after the entire HTML has finished downloading, parsing and executing.

To overcome these issues, we can no longer stick to a static HTML file, and so we shall leverage the power of a server. Or, more precisely, the power of a Cloudflare serverless worker.
<br>
This worker should intercept every HTML document request and tailor a response that fits it perfectly.

The entire flow can be described as follows:

1. The browser sends an HTML document request to the Cloudflare worker.
2. The Cloudflare worker checks for the existence of an `X-Cached` header in the request.
   If such a header exists, it will iterate over its values and inline in the response only the relevant* assets that are absent from it.
   If such a header doesn't exist, it will inline all the relevant* assets in the response.
3. The app will then extract all of the inlined assets, cache them in a service worker, and then precache all of the other assets.
4. The next time the page is reloaded, the service worker will send the HTML document request along with an `X-Cached` header specifying all of its cached assets.

\* Both initial and page-specific assets.

This ensures that the browser receives exactly the assets it needs (no more, no less) to display the current page **in a single roundtrip**!

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```js
class InjectAssetsPlugin {
  apply(compiler) {
    const production = compiler.options.mode === 'production'

    compiler.hooks.compilation.tap('InjectAssetsPlugin', compilation => {
      .
      .
      .
    })

    if (!production) return

    compiler.hooks.afterEmit.tapAsync('InjectAssetsPlugin', (compilation, callback) => {
      let worker = readFileSync(join(__dirname, '..', 'build', '_worker.js'), 'utf-8')
      let html = readFileSync(join(__dirname, '..', 'build', 'index.html'), 'utf-8')

      html = html
        .replace(/type=\"module\"/g, () => 'defer')
        .replace(/,\"scripts\":\s*\[(.*?)\]/g, () => '')
        .replace('preloadScripts(scripts)', () => '')

      const rawAssets = compilation.getAssets()
      const pages = getPages(rawAssets)
      const assets = rawAssets
        .filter(({ name }) => /^scripts\/.+\.js$/.test(name))
        .map(({ name, source }) => ({
          url: `/${name}`,
          source: source.source(),
          parentPaths: pages.filter(({ scripts }) => scripts.includes(name)).map(({ path }) => path)
        }))

      const initialScriptsString = html.match(/<script\s+defer[^>]*>([\s\S]*?)(?=<\/head>)/)[0]
      const initialScriptsStrings = initialScriptsString.split('</script>')
      const initialScripts = assets
        .filter(({ url }) => initialScriptsString.includes(url))
        .map(asset => ({ ...asset, order: initialScriptsStrings.findIndex(script => script.includes(asset.url)) }))
        .sort((a, b) => a.order - b.order)
      const asyncScripts = assets.filter(asset => !initialScripts.includes(asset))

      worker = worker
        .replace('INJECT_INITIAL_MODULE_SCRIPTS_STRING_HERE', () => JSON.stringify(initialScriptsString))
        .replace('INJECT_INITIAL_SCRIPTS_HERE', () => JSON.stringify(initialScripts))
        .replace('INJECT_ASYNC_SCRIPTS_HERE', () => JSON.stringify(asyncScripts))
        .replace('INJECT_HTML_HERE', () => JSON.stringify(html))

      writeFileSync(join(__dirname, '..', 'build', '_worker.js'), worker)

      callback()
    })
  }
}

export default InjectAssetsPlugin
```

_[public/\_worker.js](public/_worker.js)_

```js
const initialModuleScriptsString = INJECT_INITIAL_MODULE_SCRIPTS_STRING_HERE
const initialScripts = INJECT_INITIAL_SCRIPTS_HERE
const asyncScripts = INJECT_ASYNC_SCRIPTS_HERE
const html = INJECT_HTML_HERE

const allScripts = [...initialScripts, ...asyncScripts]
const documentHeaders = {
  'Cache-Control': 'public, max-age=0, must-revalidate',
  'Content-Type': 'text/html; charset=utf-8'
}

const isMatch = (pathname, path) => {
  if (pathname === path) return { exactMatch: true, match: true }
  if (!path.includes(':')) return { match: false }

  const pathnameParts = pathname.split('/')
  const pathParts = path.split('/')
  const match = pathnameParts.every((part, ind) => part === pathParts[ind] || pathParts[ind]?.startsWith(':'))

  return {
    match,
    exactMatch: match && pathnameParts.length === pathParts.length
  }
}

export default {
  fetch(request, env) {
    const pathname = new URL(request.url).pathname.toLowerCase()
    const userAgent = (request.headers.get('User-Agent') || '').toLowerCase()
    const xCached = request.headers.get('X-Cached')
    const bypassWorker = ['prerender', 'googlebot'].some(agent => userAgent.includes(agent)) || pathname.includes('.')

    if (bypassWorker) return env.ASSETS.fetch(request)

    const cachedScripts = xCached
      ? allScripts.filter(({ url }) => xCached.includes(url.match(/(?<=\.)[^.]+(?=\.js$)/)[0]))
      : []
    const uncachedScripts = allScripts.filter(script => !cachedScripts.includes(script))

    if (!uncachedScripts.length) {
      return new Response(html, { headers: documentHeaders })
    }

    let body = html.replace(initialModuleScriptsString, () => '')

    const injectedInitialScriptsString = initialScripts
      .map(script =>
        cachedScripts.includes(script)
          ? `<script src="${script.url}"></script>`
          : `<script id="${script.url}">${script.source}</script>`
      )
      .join('\n')

    body = body.replace('</body>', () => `<!-- INJECT_ASYNC_SCRIPTS_HERE -->${injectedInitialScriptsString}\n</body>`)

    asyncScripts.forEach(script => {
      const parentsPaths = script.parentPaths.map(path => ({ path, ...isMatch(pathname, path) }))

      script.exactMatch = parentsPaths.some(({ exactMatch }) => exactMatch)

      if (!script.exactMatch) script.match = parentsPaths.some(({ match }) => match)
    })

    const exactMatchingPageScripts = asyncScripts.filter(({ exactMatch }) => exactMatch)
    const pageScripts = exactMatchingPageScripts.length
      ? exactMatchingPageScripts
      : asyncScripts.filter(({ match }) => match)
    const uncachedPageScripts = pageScripts.filter(script => !cachedScripts.includes(script))
    const injectedAsyncScriptsString = uncachedPageScripts.reduce(
      (str, { url, source }) => `${str}\n<script id="${url}">${source}</script>`,
      ''
    )

    body = body.replace('<!-- INJECT_ASYNC_SCRIPTS_HERE -->', () => injectedAsyncScriptsString)

    return new Response(body, { headers: documentHeaders })
  }
}
```

_[src/utils/extract-inline-scripts.ts](src/utils/extract-inline-scripts.ts)_

```js
const extractInlineScripts = () => {
  const inlineScripts = [...document.body.querySelectorAll('script[id]:not([src])')].map(({ id, textContent }) => ({
    url: id,
    source: textContent
  }))

  return inlineScripts
}

export default extractInlineScripts
```

_[src/utils/service-worker-registration.ts](src/utils/service-worker-registration.ts)_

```js
import extractInlineScripts from './extract-inline-scripts'

const register = () => {
  window.addEventListener(
    'load',
    async () => {
      try {
        const registration = await navigator.serviceWorker.register('/service-worker.js')

        console.log('Service worker registered!')

        registration.addEventListener('updatefound', () => {
          registration.installing?.postMessage({ inlineAssets: extractInlineScripts() })
        })
      } catch (err) {
        console.error(err)
      }
    },
    { once: true }
  )
}
```

_[public/service-worker.js](public/service-worker.js)_

```js
const CACHE_NAME = 'my-csr-app'

const allAssets = self.__WB_MANIFEST.map(({ url }) => url)

const createPromiseResolve = () => {
  let resolve
  const promise = new Promise(res => (resolve = res))

  return [promise, resolve]
}

const [precacheAssetsPromise, precacheAssetsResolve] = createPromiseResolve()

const getCache = () => caches.open(CACHE_NAME)

const getCachedAssets = async cache => {
  const keys = await cache.keys()

  return keys.map(({ url }) => `/${url.replace(self.registration.scope, '')}`)
}

const cacheInlineAssets = async assets => {
  const cache = await getCache()

  assets.forEach(({ url, source }) => {
    const response = new Response(source, {
      headers: {
        'Cache-Control': 'public, max-age=31536000, immutable',
        'Content-Type': 'application/javascript'
      }
    })

    cache.put(url, response)

    console.log(`Cached %c${url}`, 'color: yellow; font-style: italic;')
  })
}

const precacheAssets = async ({ ignoreAssets }) => {
  const cache = await getCache()
  const cachedAssets = await getCachedAssets(cache)
  const assetsToPrecache = allAssets.filter(asset => !cachedAssets.includes(asset) && !ignoreAssets.includes(asset))

  await cache.addAll(assetsToPrecache)
  await removeUnusedAssets()
  await fetchDocument('/')
}

const removeUnusedAssets = async () => {
  const cache = await getCache()
  const cachedAssets = await getCachedAssets(cache)

  cachedAssets.forEach(asset => {
    if (!allAssets.includes(asset)) cache.delete(asset)
  })
}

const fetchDocument = async url => {
  const cache = await getCache()
  const cachedAssets = await getCachedAssets(cache)
  const cachedDocument = await cache.match('/')

  try {
    const response = await fetch(url, {
      headers: { 'X-Cached': cachedAssets.join(', ') }
    })

    return response
  } catch (err) {
    return cachedDocument
  }
}

const fetchAsset = async request => {
  const cache = await getCache()
  const cachedResponse = await cache.match(request)

  return cachedResponse || fetch(request)
}

self.addEventListener('install', event => {
  event.waitUntil(precacheAssetsPromise)
  self.skipWaiting()
})

self.addEventListener('message', async event => {
  const { inlineAssets } = event.data

  await cacheInlineAssets(inlineAssets)
  await precacheAssets({ ignoreAssets: inlineAssets.map(({ url }) => url) })

  precacheAssetsResolve()
})

self.addEventListener('fetch', event => {
  const { request } = event

  if (request.destination === 'document') return event.respondWith(fetchDocument(request.url))
  if (['font', 'script'].includes(request.destination)) event.respondWith(fetchAsset(request))
})
```

The results for a fresh (entirely uncached) load are exceptional:

![Inline Scripts](images/inline-scripts.png)

![Inline Scripts CDN Response Time](images/inline-scripts-cdn-response-time.png)

![Inline Scripts Parsing Breakdown](images/inline-scripts-parsing-breakdown.png)

On the next load, the Cloudflare worker responds with a minimal (1.8kb) HTML document, and all assets are immediately served from the cache.

This optimization leads us to another one: splitting chunks into even smaller pieces.

As a rule of thumb, splitting the bundle into too many chunks can hurt performance. This is because the page won't be rendered until all of its files are downloaded, and the more chunks there are, the greater the likelihood that one of them will be delayed (as hardware and network speeds are non-linear).
<br>
But in our case this is irrelevant, since we inline all the relevant chunks, and so they are all fetched at once.

_[rspack.config.js](rspack.config.js)_

```diff
optimization: {
  splitChunks: {
    chunks: 'initial',
    cacheGroups: {
      vendors: {
+       minSize: 10000,
      }
    }
  }
},
```

This extreme splitting will lead to better cache persistence and, in turn, to faster load times with partial cache.

## Leveraging the 304 Status Code

When a static asset is fetched from a CDN, it includes an `ETag` header, which is a content hash of the resource. On subsequent requests, the browser checks if it has a stored ETag. If it does, it sends the ETag in an `If-None-Match` header. The CDN then compares the received ETag with the current one: if they match, it returns a `304 Not Modified` status, indicating the browser can use the cached asset; if not, it returns the new asset with a `200` status.
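An illustrative exchange (the hash is made up):

```
GET /scripts/main.5f8a91c2.js HTTP/1.1
If-None-Match: "5f8a91c2"

HTTP/1.1 304 Not Modified
ETag: "5f8a91c2"
```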
In a traditional CSR app, reloading a page results in the HTML document getting a `304 Not Modified` response, with all other assets served from the cache. But the browser stores ETags per route, so `/lorem-ipsum` and `/pokemon` have separate cache entries, even if their ETags are identical.

Since a CSR SPA has only one HTML file, the same ETag is used for every page request. However, because the ETag is stored per route, the browser won't send an `If-None-Match` header for unvisited pages, leading to a `200` status and a redownload of the HTML, even though it's the same file.

However, we can easily create our own (improved) implementation of this behavior through collaboration between the two workers:

_[scripts/inject-assets-plugin.js](scripts/inject-assets-plugin.js)_

```diff
+import { createHash } from 'node:crypto'

class InjectAssetsPlugin {
  apply(compiler) {
    .
    .
    .
    compiler.hooks.afterEmit.tapAsync('InjectAssetsPlugin', (compilation, callback) => {
      let worker = readFileSync(join(__dirname, '..', 'build', '_worker.js'), 'utf-8')
      let html = readFileSync(join(__dirname, '..', 'build', 'index.html'), 'utf-8')
      .
      .
      .
+     const documentEtag = createHash('sha256').update(html).digest('hex').slice(0, 16)

      worker = worker
        .replace('INJECT_INITIAL_MODULE_SCRIPTS_STRING_HERE', () => JSON.stringify(initialScriptsString))
        .replace('INJECT_INITIAL_SCRIPTS_HERE', () => JSON.stringify(initialScripts))
        .replace('INJECT_ASYNC_SCRIPTS_HERE', () => JSON.stringify(asyncScripts))
        .replace('INJECT_HTML_HERE', () => JSON.stringify(html))
+       .replace('INJECT_DOCUMENT_ETAG_HERE', () => JSON.stringify(documentEtag))

      writeFileSync(join(__dirname, '..', 'build', '_worker.js'), worker)

      callback()
    })
  }
}
```

_[public/\_worker.js](public/_worker.js)_

```diff
+const documentEtag = INJECT_DOCUMENT_ETAG_HERE
.
.
.
export default {
  fetch(request, env) {
+   if (request.headers.get('If-None-Match') === documentEtag) {
+     return new Response(null, { status: 304, headers: documentHeaders })
+   }
    .
    .
    .
  }
}
```

_[public/service-worker.js](public/service-worker.js)_

```diff
.
.
.
const getRequestHeaders = responseHeaders => ({
  'If-None-Match': responseHeaders?.get('ETag') || responseHeaders?.get('X-ETag'),
  'X-Cached': allAssets
    .filter(asset => asset.endsWith('.js'))
    .map(asset => asset.match(/(?<=\.)[^.]+(?=\.js$)/)?.[0])
    .join()
})
.
.
.
const precacheAssets = async ({ ignoreAssets }) => {
  .
  .
  .
+ await fetchDocument('/')
}

const fetchDocument = async url => {
  const cache = await getCache()
  const cachedDocument = await cache.match('/')

  try {
    const response = await fetch(url, { headers: getRequestHeaders(cachedDocument?.headers) })

    if (response.status === 304) return cachedDocument

    cache.put('/', response.clone())

    return response
  } catch (err) {
    return cachedDocument
  }
}
```

_Note that a custom `X-ETag` header is checked for situations where the CDN does not automatically send an `ETag`._

Now our serverless worker will always respond with a `304 Not Modified` status code whenever there are no changes, even for unvisited pages.

## Navigation Preload

When a service worker is used, the browser delays sending the initial HTML document request until the service worker has loaded, which can cause a slight to moderate page delay, depending on the hardware.

The native solution to this problem is called _[Navigation Preload](https://web.dev/blog/navigation-preload)_. We will implement it to ensure the document request is sent immediately, without waiting for the service worker to load:

_[src/utils/service-worker-registration.ts](src/utils/service-worker-registration.ts)_

```js
const register = () => {
  .
  .
  .
  navigator.serviceWorker?.addEventListener('message', async event => {
    const { navigationPreloadHeader } = event.data

    const registration = await navigator.serviceWorker.ready

    registration.navigationPreload.setHeaderValue(navigationPreloadHeader)
  })
}
```

_[public/service-worker.js](public/service-worker.js)_

```js
.
.
.
const fetchDocument = async ({ url, preloadResponse }) => {
  const cache = await getCache()
  const cachedDocument = await cache.match('/')

  try {
    const response = await (preloadResponse && cachedDocument
      ? preloadResponse
      : fetch(url, { headers: getRequestHeaders(cachedDocument?.headers) }))

    if (response.status === 304) return cachedDocument

    cache.put('/', response.clone())

    self.clients.matchAll({ includeUncontrolled: true }).then(([client]) => {
      client?.postMessage({ navigationPreloadHeader: JSON.stringify(getRequestHeaders(response.headers)) })
    })

    return response
  } catch (err) {
    return cachedDocument
  }
}
.
.
.
self.addEventListener('activate', event => event.waitUntil(self.registration.navigationPreload?.enable()))
.
.
.
self.addEventListener('fetch', event => {
  const { request, preloadResponse } = event

  if (request.destination === 'document') return event.respondWith(fetchDocument({ url: request.url, preloadResponse }))
  if (['font', 'script'].includes(request.destination)) event.respondWith(fetchAsset(request))
})
```

_[public/\_worker.js](public/_worker.js)_

```js
 fetch(request, env) {
  let { 'If-None-Match': etag, 'X-Cached': xCached } = JSON.parse(
    request.headers.get('service-worker-navigation-preload') || '{}'
  )

  etag ||= request.headers.get('If-None-Match')

  if (etag === documentEtag) return new Response(null, { status: 304, headers: documentHeaders })
  .
  .
  .
  xCached ||= request.headers.get('X-Cached')
  .
  .
  .
 }
```

With this implementation, the document request will be sent immediately, independent of the service worker.

## Tweaking Further

### Transitioning Async Pages

_Note: requires React (v18), Svelte or Solid.js_

When we split a page from the main app, we separate its render phase, meaning the app will render before the page renders.
<br>
So when we move from one async page to another, we see a blank space that remains until the page is rendered:

![Before Page Render](images/before-page-render.png)
![After Page Render](images/after-page-render.png)

This happens due to the common approach of wrapping only the routes with Suspense:

```js
const App = () => {
  return (
    <>
      <Navigation />

      <Suspense>
        <Routes>{routes}</Routes>
      </Suspense>
    </>
  )
}
```

React 18 introduced the `useTransition` hook, which allows us to delay a render until some criteria are met.
<br>
We will use this hook to delay the page's navigation until it is ready:

_[useTransitionNavigate.ts](https://github.com/theninthsky/frontend-essentials/blob/main/src/hooks/useTransitionNavigate.ts)_

```js
import { useTransition } from 'react'
import { useNavigate } from 'react-router-dom'

const useTransitionNavigate = () => {
  const [, startTransition] = useTransition()
  const navigate = useNavigate()

  return (to, options) => startTransition(() => navigate(to, options))
}

export default useTransitionNavigate
```

_[src/components/common/NavigationLink.tsx](src/components/common/NavigationLink.tsx)_

```js
const NavigationLink = ({ to, onClick, children }) => {
  const navigate = useTransitionNavigate()

  const onLinkClick = event => {
    event.preventDefault()
    navigate(to)
    onClick?.()
  }

  return (
    <NavLink to={to} onClick={onLinkClick}>
      {children}
    </NavLink>
  )
}

export default NavigationLink
```

Now async pages will feel like they were never split from the main app.

### Revalidating Active Apps

Some users leave the app open for extended periods of time, so another thing we can do is revalidate the app (download its new assets) while it is running:

_[src/utils/service-worker-registration.ts](src/utils/service-worker-registration.ts)_

```diff
+const REVALIDATION_INTERVAL_HOURS = 1

const register = () => {
  window.addEventListener(
    'load',
    async () => {
      try {
        const registration = await navigator.serviceWorker.register('/service-worker.js')

        console.log('Service worker registered!')

        registration.addEventListener('updatefound', () => {
          registration.installing?.postMessage({ inlineAssets: extractInlineScripts() })
        })

+       setInterval(() => registration.update(), REVALIDATION_INTERVAL_HOURS * 3600 * 1000)
      } catch (err) {
        console.error(err)
      }
    },
    { once: true }
  )
}
```

The code above revalidates the app every hour.

The revalidation process is extremely cheap, since it only involves refetching the service worker (which will return a _304 Not Modified_ status code if it hasn't changed).
<br>
When the service worker **does** change, it means that new assets are available, and so they will be selectively downloaded and cached.

## Summary

We split our bundle into many small chunks, greatly improving our app's caching abilities.
<br>
We split every page, so that upon loading one, only what is relevant is downloaded right away.
<br>
We've managed to make the initial (cacheless) load of our app extremely fast: everything that a page requires in order to load is dynamically injected into the document.
<br>
We even preload the page's data, eliminating the famous data-fetching waterfall that CSR apps are known for.
<br>
In addition, we precache all pages, which makes it seem as if they were never split from the main bundle.

All of this was achieved without compromising the developer experience and without dictating which JS framework to use.

## Deploying

The biggest advantage of a static app is that it can be served entirely from a CDN.
<br>
A CDN has many PoPs (Points of Presence), also called "Edge Networks". These PoPs are distributed around the globe and are thus able to serve files to every region **much** faster than a remote server.

The fastest CDN to date is Cloudflare, which has more than 250 PoPs (and counting):

![Cloudflare PoPs](images/cloudflare-pops.png)

https://speed.cloudflare.com

https://blog.cloudflare.com/benchmarking-edge-network-performance

We can easily deploy our app using Cloudflare Pages:
<br>
https://pages.cloudflare.com

## Benchmark

To conclude this section, we will benchmark our app against _[Next.js](https://nextjs.org/docs/getting-started)_'s documentation site, which is **entirely SSG**.
<br>
We will compare the minimalistic _Accessibility_ page to our _Lorem Ipsum_ page. Both pages include ~246kb of JS in their render-critical chunks (preloads and prefetches that come afterwards are irrelevant).
<br>
You can click on each link to perform a live benchmark.

_[Accessibility | Next.js](https://pagespeed.web.dev/report?url=https%3A%2F%2Fnextjs.org%2Fdocs%2Faccessibility)_
<br>
_[Lorem Ipsum | Client-side Rendering](https://pagespeed.web.dev/report?url=https%3A%2F%2Fclient-side-rendering.pages.dev%2Florem-ipsum)_

I performed Google's _PageSpeed Insights_ benchmark (simulating a slow 4G network) about 20 times for each page and picked the highest score.
<br>
These are the results:

![Next.js Benchmark](images/nextjs-benchmark.png)
![Client-side Rendering Benchmark](images/client-side-rendering-benchmark.png)

As it turns out, performance is **not** a default in Next.js.

_Note that this benchmark only tests the first load of the page, without even considering how the app performs when it is fully cached (where CSR really shines)._

## Areas for Improvement

- Compress assets using _[Brotli level 11](https://d33wubrfki0l68.cloudfront.net/3434fd222424236d1f0f5b4596de1480b5378156/1a5ec/assets/wp-content/uploads/2018/07/compression_estimator_jquery.jpg)_ (Cloudflare only uses level 4 to save on computing resources).
- Use the paid _[Cloudflare Argo](https://blog.cloudflare.com/argo)_ service for even better response times.

# SEO

## Indexing

### Google

It is a common misconception that Google has trouble properly indexing CSR (JS) apps.
<br>
That might have been the case in 2017, but as of today, Google indexes CSR apps mostly flawlessly.

Indexed pages will have a title, description, content and all other SEO-related attributes, as long as we remember to dynamically set them (either manually, like [this](https://github.com/theninthsky/frontend-essentials/blob/main/src/components/Meta.tsx), or using a package such as _[react-helmet](https://www.npmjs.com/package/react-helmet)_).
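For instance, a minimal, dependency-free way to do this (the hook name and values are ours, not the project's):

```js
import { useEffect } from 'react'

// Updates the document title and description meta tag when the page mounts
const usePageMeta = (title, description) => {
  useEffect(() => {
    document.title = title
    document.querySelector('meta[name="description"]')?.setAttribute('content', description)
  }, [title, description])
}

export default usePageMeta
```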
And indeed, all of our app's pages get properly indexed:

https://www.google.com/search?q=site:https://client-side-rendering.pages.dev

![Google Search Results](images/google-search-results.png)
![Google Lorem Ipsum Search Results](images/google-lorem-ipsum-search-results.png)

Googlebot's ability to render JS can easily be demonstrated by performing a live URL test of our app in the _[Google Search Console](https://search.google.com/search-console)_:

![Google Search Console Rendering](images/google-search-console-rendering.png)

Googlebot uses the latest version of Chromium to crawl apps, so all we need to do is make sure our app loads fast and fetches its data quickly.

Even when data takes a long time to fetch, Googlebot will, in most cases, wait for it before taking a snapshot of the page:
<br>
https://support.google.com/webmasters/thread/202552760/for-how-long-does-googlebot-wait-for-the-last-http-request
<br>
https://support.google.com/webmasters/thread/165370285?hl=en&msgid=165510733

A detailed explanation of Googlebot's JS crawling process can be found here:
<br>
https://developers.google.com/search/docs/crawling-indexing/javascript/javascript-seo-basics

If Googlebot fails to render some pages, it is mostly due to Google's unwillingness to spend the resources required to crawl the website, meaning the website has a low _[Crawl Budget](https://developers.google.com/search/blog/2017/01/what-crawl-budget-means-for-googlebot)_.
<br>
This can be confirmed by inspecting the crawled page (clicking _View Crawled Page_ in the search console) and making sure all failed requests show the _Other error_ alert (which means those requests were intentionally aborted by Googlebot):

![Google Search Console Insufficient Fetch Quota](images/google-search-console-insufficient-fetch-quota.png)

This should only happen to websites that Google deems to have no interesting content or very low traffic (such as our demo app).

More information can be found here: https://support.google.com/webmasters/thread/4425254?hl=en&msgid=4426601

### Prerendering

Other search engines, such as Bing, cannot render JS, so in order to have them crawl our app properly, we need to serve them a **prerendered** version of our pages.
<br>
Prerendering is the act of crawling web apps in production (using headless Chromium) and generating a complete HTML file (with data) for each page.
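Conceptually, a prerenderer boils down to something like the following sketch (using _Puppeteer_ here for illustration; real prerender servers add caching, request blocking and timeouts):

```js
import puppeteer from 'puppeteer'

// Minimal sketch: render a production page in headless Chromium and snapshot its final HTML
const prerender = async url => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()

  await page.goto(url, { waitUntil: 'networkidle0' }) // wait for data fetching to settle

  const html = await page.content() // the fully-rendered document, data included

  await browser.close()

  return html
}
```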
We have two options when it comes to prerendering:

1. We can deploy our own prerender server using _[Prerender](https://github.com/prerender/prerender)_ (or my own _[Renderless](https://github.com/frontend-infra/renderless.git)_).
2. We can use a dedicated service such as _[Prerender.io](https://prerender.io)_, which is very expensive but offers 1000 free prerenders a month.

**Serverless prerendering is the recommended approach**, since it can be very cheap, especially on _[GCP](https://cloud.google.com)_.

Then we redirect web crawlers (identified by their `User-Agent` header string) to our prerenderer, using a Cloudflare Worker (for example):

_[public/\_worker.js](public/_worker.js)_

```js
const BOT_AGENTS = ['bingbot', 'yandex', 'twitterbot', 'whatsapp' /* ... */]

const fetchPrerendered = async ({ url, headers }) => {
  const headersToSend = new Headers(headers)

  /* Custom prerenderer */
  const prerenderUrl = new URL(`${YOUR_PRERENDERER_URL}?url=${url}`)

  /* OR Prerender.io:

  const prerenderUrl = `https://service.prerender.io/${url}`

  headersToSend.set('X-Prerender-Token', YOUR_PRERENDER_IO_TOKEN)
  */

  const prerenderRequest = new Request(prerenderUrl, {
    headers: headersToSend,
    redirect: 'manual'
  })

  const response = await fetch(prerenderRequest)

  // Reconstruct the response so its body, status and headers are forwarded intact
  return new Response(response.body, response)
}

export default {
  fetch(request, env) {
    const pathname = new URL(request.url).pathname.toLowerCase()
    const userAgent = (request.headers.get('User-Agent') || '').toLowerCase()

    // A crawler requesting a document (and not a static asset)
    if (BOT_AGENTS.some(agent => userAgent.includes(agent)) && !pathname.includes('.')) {
      return fetchPrerendered(request)
    }

    return env.ASSETS.fetch(request)
  }
}
```

Here is an up-to-date list of all bot agents (web crawlers): https://docs.prerender.io/docs/how-to-add-additional-bots#cloudflare. Remember to exclude `googlebot` from the list.
_Prerendering_, also called _Dynamic Rendering_, is encouraged by _[Microsoft](https://blogs.bing.com/webmaster/october-2018/bingbot-Series-JavaScript,-Dynamic-Rendering,-and-Cloaking-Oh-My)_ and is heavily used by many popular websites, including Twitter.

The results are as expected:

https://www.bing.com/search?q=site%3Ahttps%3A%2F%2Fclient-side-rendering.pages.dev

![Bing Search Results](images/bing-search-results.png)

_Note that when using CSS-in-JS, we can [disable the speedy optimization](src/utils/disable-speedy.ts) during prerendering if we want our styles to be emitted into the DOM._

## Social Media Share Previews

When we share a link to a CSR app on social media, we can see that no matter which page we link to, the preview remains the same.
<br>
This happens because most CSR apps have a single, contentless HTML file, and social media crawlers do not render JS.
<br>
This is where prerendering comes to our aid once again: it generates the proper share preview for each page:

_**WhatsApp:**_

![Whatsapp Share Preview](images/whatsapp-share-preview.png)

_**Facebook:**_

![Facebook Share Preview](images/facebook-share-preview.png)

## Sitemaps

In order to make all of our app's pages discoverable to search engines, it is recommended to create a `sitemap.xml` file that specifies all of our website's routes.

Since we already have a centralized _[pages.js](src/pages.js)_ file, we can easily generate a sitemap at build time:

_[create-sitemap.js](scripts/create-sitemap.js)_

```js
import { Readable } from 'stream'
import { writeFile } from 'fs/promises'
import { SitemapStream, streamToPromise } from 'sitemap'

import pages from '../src/pages.js'

const stream = new SitemapStream({ hostname: 'https://client-side-rendering.pages.dev' })
const links = pages.map(({ path }) => ({ url: path, changefreq: 'weekly' }))

streamToPromise(Readable.from(links).pipe(stream))
  .then(data => data.toString())
  .then(res => writeFile('public/sitemap.xml', res))
  .catch(console.error)
```

Running this script emits the following sitemap:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1" xmlns:news="http://www.google.com/schemas/sitemap-news/0.9" xmlns:video="http://www.google.com/schemas/sitemap-video/1.1" xmlns:xhtml="http://www.w3.org/1999/xhtml">
   <url>
      <loc>https://client-side-rendering.pages.dev/</loc>
      <changefreq>weekly</changefreq>
   </url>
   <url>
      <loc>https://client-side-rendering.pages.dev/lorem-ipsum</loc>
      <changefreq>weekly</changefreq>
   </url>
   <url>
      <loc>https://client-side-rendering.pages.dev/pokemon</loc>
      <changefreq>weekly</changefreq>
   </url>
</urlset>
```

We can manually submit our sitemap to _[Google Search Console](https://search.google.com/search-console)_ and _[Bing Webmaster Tools](https://www.bing.com/webmasters)_.
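For reference, the generation script above only assumes that each entry in _pages.js_ exposes a `path` property. A minimal shape matching the emitted sitemap (hypothetical; the real file carries additional per-page fields) would be:

```js
// Hypothetical minimal shape of src/pages.js, matching the three routes in the sitemap above
export default [
  { path: '/' },
  { path: '/lorem-ipsum' },
  { path: '/pokemon' }
]
```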
# CSR vs. SSR

As mentioned above, an in-depth comparison of all rendering methods can be found here: https://client-side-rendering.pages.dev/comparison

## Why Not SSG?

We have seen the advantages of static files: they are cacheable and can be served from a nearby CDN without requiring a server.
<br>
This might lead us to believe that SSG combines the benefits of both CSR and SSR: it makes our app visually load very fast (_[FCP](https://web.dev/fcp)_), independently of our API server's response times.
<br>
However, in reality, SSG has a major limitation:
<br>
Since JS isn't active during the initial moments, anything that relies on JS in order to be presented either won't be visible or will be displayed incorrectly (like components that depend on the `window.matchMedia` function to render).
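As an illustration, consider a (hypothetical) component like the following:

```js
// Hypothetical example: a component that cannot render correctly without JS,
// since `window.matchMedia` only exists in the browser and not at build time
const ThemedLogo = () => {
  const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches

  return <img src={prefersDark ? '/logo-dark.svg' : '/logo-light.svg'} alt="Logo" />
}

export default ThemedLogo
```

At generation time there is no `window`, so the static HTML either omits this component or commits to a default variant that may be wrong until JS loads and the component re-renders.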
A classic example of this issue can be seen on the following website:
<br>
https://death-to-ie11.com

Notice how the timer isn't visible immediately? That's because it's generated by JS, which takes time to download and execute.

We can see a similar issue when refreshing Vercel's _Guides_ page with some filters applied:
<br>
https://vercel.com/guides?topics=analytics

This happens because there are 65,536 (2^16) possible filter combinations, and storing each combination as a separate HTML file would require a lot of server storage.
<br>
So they generate a single `guides.html` file that contains all of the data, but this static file doesn't know which filters are applied until JS is loaded, causing a layout shift.

It's important to note that even with _[Incremental Static Regeneration](https://nextjs.org/docs/pages/building-your-application/data-fetching/incremental-static-regeneration)_, users will still have to wait for a server response when visiting pages that have not yet been cached (just like in SSR).

Another example of this issue is JS animations: they might appear static initially and only start animating once JS is loaded.

There are many instances where this delayed functionality harms the user experience, such as websites that only show the navigation bar after JS is loaded (since they rely on Local Storage to check whether a user info entry exists).

Another critical issue, especially for e-commerce websites, is that SSG pages may display outdated data (like a product's price or availability).

This is precisely why no major e-commerce website uses SSG.

## The Cost of Hydration

Under a fast internet connection, both CSR and SSR perform great (as long as both are optimized), and the higher the connection speed, the closer they get in terms of loading times.
<br>
However, on slow connections (such as mobile networks), SSR seems to have an edge over CSR in loading times.

Since SSR apps are rendered on the server, the browser receives the fully-constructed HTML file and can show the page to the user without waiting for JS to download. When JS is eventually downloaded and parsed, the framework "hydrates" the DOM with functionality (without having to reconstruct it).

Although this seems like a big advantage, it introduces an undesired side effect, especially on slower connections:
<br>
Until JS is loaded, users can click wherever they want, but the app won't react to any of their JS-based events.

It is a bad user experience when buttons don't respond to interactions, but it becomes a much bigger problem when default events are not prevented.

This is a comparison between Next.js's website and our Client-side Rendering app on a fast 3G connection:

![SSR Load 3G](images/ssr-load-3g.gif)
![CSR Load 3G](images/csr-load-3g.gif)

What happened here?

Since JS hadn't been loaded yet, Next.js's website could not prevent the default behavior of anchor elements (`<a>`), which is to navigate to another page, so every click on them triggered a full page reload.
<br>
And the slower the connection, the more severe this issue becomes.
<br>
In other words, where SSR should have had a performance edge over CSR, we instead see a rather "dangerous" behavior that can significantly degrade the user experience.

This issue cannot occur in CSR apps, since by the time they render, JS has already been fully loaded.

# Conclusion

We saw that client-side rendering is on par with, and sometimes even better than, SSR in terms of initial loading times (and far surpasses it in navigation times).
<br>
We've also seen that Googlebot indexes client-side rendered apps perfectly well, and that we can easily set up a prerender server to serve all other bots and crawlers.
<br>
And most importantly, we achieved all of this just by adding a few files and using a prerender service, so every existing CSR app should be able to adopt these changes quickly and benefit from them.

These facts lead to the conclusion that there is no compelling reason to use SSR; doing so would only add unnecessary complexity and limitations to our app, degrading both the developer and user experience, while also incurring higher server costs.

## What Might Change in the Future

As time passes, [connection speeds are getting faster](https://www.speedtest.net/global-index) and end-user devices are becoming more powerful.
As a result, the performance differences between the various rendering methods are bound to diminish further (except for SSR, which still depends on API server response times).

A newer SSR method called _Streaming SSR_ (which React implements through Server Components) and newer frameworks like Qwik are capable of streaming responses to the browser without waiting for the API server's response.
<br>
However, there are also newer and more efficient CSR frameworks like Svelte and Solid.js, which have much smaller bundle sizes and are significantly faster than React (greatly improving FCP on slow networks).

Nevertheless, it's important to note that nothing will ever outperform the instant page transitions that client-side rendering provides, nor the simple and flexible development flow it offers.