Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/alufers/hn-offline
An offline-first client for hackernews that even fetches the article contents for reading on the go.
- Host: GitHub
- URL: https://github.com/alufers/hn-offline
- Owner: alufers
- License: agpl-3.0
- Created: 2019-07-09T22:32:18.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2022-12-10T21:55:51.000Z (about 2 years ago)
- Last Synced: 2024-10-11T20:31:01.841Z (4 months ago)
- Topics: hackernews, indexeddb, lightweight, offline, preact, pwa, react
- Language: TypeScript
- Homepage: https://hn-offline.albert-koczy.com/
- Size: 2.39 MB
- Stars: 2
- Watchers: 2
- Forks: 1
- Open Issues: 51
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# hn-offline
![A banner with three mobile screenshots of hn-offline side by side and some descriptions](assets/banner.png)
**[See it in action](https://hn-offline.albert-koczy.com/)**
hn-offline is an offline-first client for hackernews that even fetches the article contents for reading on the go. First, it downloads the data from HackerNews via the API, along with the article contents scraped using [Readability](https://github.com/mozilla/readability) (the library used by the reader mode in Firefox). Everything is then saved into IndexedDB via a service worker for later consumption when offline.
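
To make that flow concrete, here is a minimal TypeScript sketch of the sync-and-store step, using the public Hacker News Firebase API and a hypothetical `stories` object store; hn-offline's actual service-worker code, database schema, and readability-proxy integration will differ.

```typescript
// Sketch only: fetch top stories from the HN API and persist them to IndexedDB
// so they can be read offline later. Store and database names are made up.
const HN_API = "https://hacker-news.firebaseio.com/v0";

interface Story {
  id: number;
  title: string;
  url?: string;
  articleHtml?: string; // scraped article content, if the proxy could fetch it
}

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("hn-offline-sketch", 1);
    req.onupgradeneeded = () =>
      req.result.createObjectStore("stories", { keyPath: "id" });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function syncTopStories(limit = 30): Promise<void> {
  // 1. Fetch the current top story ids from the official HN API.
  const ids: number[] = await (await fetch(`${HN_API}/topstories.json`)).json();

  // 2. Fetch each item; a real sync would also call the readability-proxy here.
  const stories: Story[] = await Promise.all(
    ids.slice(0, limit).map(async (id) => {
      const item = await (await fetch(`${HN_API}/item/${id}.json`)).json();
      return { id: item.id, title: item.title, url: item.url };
    })
  );

  // 3. Persist everything to IndexedDB for later offline consumption.
  const db = await openDb();
  const tx = db.transaction("stories", "readwrite");
  stories.forEach((s) => tx.objectStore("stories").put(s));
  await new Promise((resolve, reject) => {
    tx.oncomplete = () => resolve(undefined);
    tx.onerror = () => reject(tx.error);
  });
}
```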
# Tech stack
The app uses [preact](https://preactjs.com/) for rendering the front-end. The whole thing is written in TypeScript. In addition, I use css-modules with less for better minification of class names. Everything is held together in a bundle with webpack. On every commit to master the app is deployed and hosted using [now](https://zeit.co/now).
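
For illustration, here is a hedged sketch of the kind of webpack rules such a setup implies (TypeScript, css-modules compiled from less with hashed class names, and the common preact/compat alias); the project's real webpack configuration may look different.

```typescript
// webpack.config.ts – illustrative sketch only, not the project's actual config.
import type { Configuration } from "webpack";

const config: Configuration = {
  module: {
    rules: [
      {
        test: /\.less$/,
        use: [
          "style-loader",
          {
            loader: "css-loader",
            options: {
              modules: {
                // Hashed class names are what gives the "better minification"
                // of class names mentioned above.
                localIdentName: "[hash:base64:5]",
              },
            },
          },
          "less-loader",
        ],
      },
      { test: /\.tsx?$/, loader: "ts-loader", exclude: /node_modules/ },
    ],
  },
  resolve: {
    extensions: [".ts", ".tsx", ".js"],
    alias: {
      // Common trick for running React-flavoured code on preact.
      react: "preact/compat",
      "react-dom": "preact/compat",
    },
  },
};

export default config;
```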
# Caveats
Currently the `readability-proxy` is hosted on [now](https://zeit.co/now) as a lambda, which means that articles are fetched with a simple HTTP client. This causes some websites (most notably [Bloomberg](https://www.bloomberg.com/europe)) to block the proxy and show a captcha that it cannot solve.
Additionally, the proxy currently does not relay images; it only hotlinks them. This means that when using the app offline, no pictures will be displayed. I plan on adding images to the scraper, but only for a self-hosted version, since I have no resources to pay for the bandwidth. Sorry for that.
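
As a rough illustration of why these caveats exist, here is a hypothetical sketch of what a readability-proxy endpoint does: a plain HTTP fetch followed by Readability extraction, with no real browser behind the request and images left hotlinked. The function name and details are illustrative, not taken from the actual project.

```typescript
import { Readability } from "@mozilla/readability";
import { JSDOM } from "jsdom";

// Sketch: fetch a page with a plain HTTP client and run Readability over it.
// Sites that gate content behind a captcha simply return the captcha markup,
// which Readability cannot turn into a readable article.
export async function extractArticle(
  url: string
): Promise<{ title: string; content: string } | null> {
  // No cookies, no JavaScript execution, no browser fingerprint.
  const response = await fetch(url);
  if (!response.ok) return null;

  const html = await response.text();
  // Pass the original URL so relative links and image srcs resolve;
  // images stay hotlinked rather than being downloaded and relayed.
  const dom = new JSDOM(html, { url });
  const article = new Readability(dom.window.document).parse();

  return article
    ? { title: article.title ?? "", content: article.content ?? "" }
    : null;
}
```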
# Roadmap
- [ ] Add better content to the sync dropdown
- [ ] Add sane error handling with retries
- [ ] Add links to the original article when viewing the offline version
- [ ] Dark theme
- [ ] Images in readability-proxy
- [ ] Caching of comments based on descendants count

# License
GNU AFFERO GENERAL PUBLIC LICENSE. See [LICENSE](LICENSE).