https://github.com/cagov/site-performance-review
Endpoint which analyzes all pages of sites for their web performance profile
- Host: GitHub
- URL: https://github.com/cagov/site-performance-review
- Owner: cagov
- License: mit
- Created: 2023-04-19T17:50:52.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-01-05T22:23:47.000Z (about 1 year ago)
- Last Synced: 2024-04-17T05:02:21.904Z (9 months ago)
- Language: JavaScript
- Size: 76.2 KB
- Stars: 2
- Watchers: 5
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# site-performance-review
Endpoint which analyzes all pages of sites for their web performance profile. Currently only set up to run against the ODI site.
The process is:
- Pull the sitemap from innovation.ca.gov/sitemap.xml
- Compare every page in it against the DynamoDB table that recorded the last URL set
- If a new page, or a page with a new lastmod date, is discovered, run a new performance analysis
- Record the updated performance analysis in DynamoDB
- The performance analysis is a slow process, taking several seconds per URL.

There are 95 URLs in the ODI site at the time of this writing. They cannot all be analyzed within the 15-minute Lambda timeout (processing time is 9-20 seconds per URL), but most of them don't change often, so running the expensive performance analysis on the bulk of them will rarely be necessary. This code is set up to run on a schedule every four hours.
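For concreteness, here is a minimal sketch of that loop in Node. The table name, key schema, and the analyzePage helper are assumptions for illustration, not the repo's actual code; it assumes a Node 18+ runtime with a global fetch:
```
// Hypothetical sketch of the review loop described above
const { DynamoDBClient, GetItemCommand, PutItemCommand } = require('@aws-sdk/client-dynamodb');

const db = new DynamoDBClient({});
const TABLE = 'performance-reviews'; // assumed table name

async function reviewSite() {
  // 1. Pull the sitemap and extract <loc>/<lastmod> pairs
  const xml = await (await fetch('https://innovation.ca.gov/sitemap.xml')).text();
  const pages = [...xml.matchAll(
    /<url>[\s\S]*?<loc>(.*?)<\/loc>[\s\S]*?<lastmod>(.*?)<\/lastmod>[\s\S]*?<\/url>/g
  )].map(([, url, lastmod]) => ({ url, lastmod }));

  for (const { url, lastmod } of pages) {
    // 2. Skip pages whose lastmod matches what we recorded last time
    const { Item } = await db.send(new GetItemCommand({
      TableName: TABLE,
      Key: { url: { S: url } },
    }));
    if (Item?.lastmod?.S === lastmod) continue;

    // 3. New or changed page: run the slow analysis and record the result
    const analysis = await analyzePage(url); // hypothetical helper
    await db.send(new PutItemCommand({
      TableName: TABLE,
      Item: {
        url: { S: url },
        lastmod: { S: lastmod },
        analysis: { S: JSON.stringify(analysis) },
      },
    }));
  }
}
```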
It is deployed to an AWS production environment via:
```
npx arc deploy --production
```
This deployed it to https://qdrfvq20o2.execute-api.us-west-1.amazonaws.com.
It was not deployed to a staging environment. A staging deploy wouldn't cause problems, but it would duplicate production's activity on the same schedule into a separate DynamoDB table that nothing reads.
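For context, here is a rough sketch of what an Architect manifest for a service shaped like this could look like. The route, table, and function names are guesses for illustration, not the repo's actual app.arc:
```
@app
site-performance-review

@http
get /

@scheduled
review rate 4 hours

@tables
performance
  url *String
```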
This can run locally if the local DynamoDB instance is populated.
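Presumably that means Architect's sandbox, which emulates the HTTP routes and an in-memory DynamoDB on your machine:
```
npx arc sandbox
```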
The 11ty build process retrieves the latest performance information from the GET endpoint, sending the site domain as a URL parameter, so we always retrieve the latest performance data.
## Development notes
This service was created to power the performance measurement now displayed in the footer of all pages on the ODI site.
The 11ty _data file calls the read endpoint on this service to get all the performance readings for the site at once during a build.
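A hypothetical version of that _data file, assuming the endpoint takes the site domain as a query-string parameter (the file path, parameter name, and response shape are all guesses):
```
// _data/performance.js (hypothetical 11ty data file)
module.exports = async function () {
  const domain = 'innovation.ca.gov';
  const res = await fetch(
    `https://qdrfvq20o2.execute-api.us-west-1.amazonaws.com/?domain=${domain}`
  );
  // One request per build returns every recorded reading for the site
  return res.json();
};
```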
I think this addition to the site dovetails nicely with the equity order and ODI's mandate to provide training: we have seen deficiencies in these metrics throughout the state, and the ODI team has the expertise to advise on how to make improvements.
Displaying the stats is cool because we are being transparent, and there are links under each number that explain why that metric is important, how we determine it, and how you can improve it in your own web service.
I created this as an external service when I discovered that I couldn't perform this audit with Lighthouse fast enough to run it during each production build. At several seconds per page, it would add over 10 minutes to the build even for ODI's small site.
I tried several approaches when building this service, starting with Lighthouse as embedded in 11ty creator Zach Leatherman's performance-leaderboard module.
There were several problems running these tools, which execute Lighthouse locally, inside the AWS Lambda environment: some produced bundles that were too large, and the total runtime took too long. I ended up switching to Google's PageSpeed Insights API to get a Lighthouse reading instead. Unfortunately that analysis is not guaranteed to be performed from the US, and we can't effectively warm the cache on the ODI site's low-traffic pages, so you see slightly lower performance scores than when we run Lighthouse against the site using dev tools on our own machines, but it is still a fair analysis.
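The PageSpeed Insights API itself is straightforward to call; this sketch fetches a reading with a single GET and returns just the Lighthouse performance score:
```
// Fetch a Lighthouse performance score from Google's PageSpeed Insights API
async function pagespeedScore(url) {
  const api = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const res = await fetch(`${api}?url=${encodeURIComponent(url)}&category=performance`);
  const data = await res.json();
  return data.lighthouseResult.categories.performance.score; // 0 to 1
}
```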
If we were to expand this service in the future we might want to try:
- Going back to running Lighthouse locally, either directly or with performance-leaderboard, instead of hitting the Google PageSpeed API
- Running this on a constantly available server instead of Lambda, so we aren't limited in total code bundle size and can safely run longer analyses