Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/dnlbauer/griesheim-transparent.de
http://griesheim-transparent.de: Full-text search for the Griesheim council information system (Ratsinformationssystem)
- Host: GitHub
- URL: https://github.com/dnlbauer/griesheim-transparent.de
- Owner: dnlbauer
- License: MIT
- Created: 2023-05-01T07:49:01.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-09-05T14:28:45.000Z (2 months ago)
- Last Synced: 2024-09-06T00:26:39.730Z (2 months ago)
- Language: Python
- Homepage:
- Size: 2.95 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
![actions](https://github.com/dnlbauer/griesheim-transparent.de/actions/workflows/build.yml/badge.svg?branch=main)
# griesheim-transparent.de
Repository for [http://griesheim-transparent.de](http://griesheim-transparent.de) - a search engine for documents of the city parliament of Griesheim (Hessen, Germany, district Darmstadt-Dieburg).

## Modules
1) **scraper**: Scrapy-based web scraper for the "Ratsinformationssystem" (a.k.a. SessionNet) at https://sessionnet.owl-it.de/griesheim/bi/info.asp (see the spider sketch after this list)
2) **frontend**: Django frontend and management jobs for analysis
3) **solr**: Solr search platform configuration
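As a rough illustration of what the scraper module does, here is a minimal Scrapy spider for a SessionNet page. This is a sketch under assumptions: the spider name, the callback structure, and the CSS selectors are hypothetical and not taken from the repository's actual code.

```python
import scrapy


class SessionNetSpider(scrapy.Spider):
    """Hypothetical sketch of a SessionNet scraper; selectors are assumed."""

    name = "sessionnet"
    start_urls = ["https://sessionnet.owl-it.de/griesheim/bi/info.asp"]

    def parse(self, response):
        # Follow links from the overview page to detail pages (assumed markup).
        for link in response.css("table a::attr(href)").getall():
            yield response.follow(link, callback=self.parse_meeting)

    def parse_meeting(self, response):
        # Yield scraped metadata as a plain item; field names are illustrative.
        yield {
            "title": response.css("h1::text").get(),
            "url": response.url,
        }
```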
## Additional services
The full service is built on several microservices required at indexing time and for data storage (see the [docker compose file](deployment/dev.yaml) for details):
- A PostgreSQL database to store scraped metadata.
- [Tika](https://hub.docker.com/r/apache/tika) for PDF metadata extraction, text extraction and OCR.
- [fpurchess/preview-service](https://hub.docker.com/r/fpurchess/preview-service) for thumbnail generation.
- [gotenberg](https://hub.docker.com/r/gotenberg/gotenberg) to convert different file formats to PDF (a usage sketch for Gotenberg and Tika follows below).
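To make the interplay of these services concrete, here is a hedged sketch of calling two of them over HTTP: Gotenberg's LibreOffice conversion route and Tika's text-extraction endpoint. The hosts and ports assume a default local setup, and the input file name is hypothetical; the actual code paths in this repository may differ.

```python
import requests

GOTENBERG_URL = "http://localhost:3000"  # assumed local Gotenberg instance (default port)
TIKA_URL = "http://localhost:9998"       # assumed local Tika server (default port)


def convert_to_pdf(path: str) -> bytes:
    """Convert an office document to PDF via Gotenberg's LibreOffice route."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{GOTENBERG_URL}/forms/libreoffice/convert",
            files={"files": (path, f)},
        )
    resp.raise_for_status()
    return resp.content  # the converted PDF bytes


def extract_text(pdf_bytes: bytes) -> str:
    """Extract plain text from a PDF with Tika; OCR applies to scanned pages
    when Tesseract is available in the Tika image."""
    resp = requests.put(
        f"{TIKA_URL}/tika",
        data=pdf_bytes,
        headers={"Accept": "text/plain"},
    )
    resp.raise_for_status()
    return resp.text


if __name__ == "__main__":
    text = extract_text(convert_to_pdf("agenda.docx"))  # hypothetical input file
    print(text[:200])
```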
## Workflow
- The scraper Docker image runs a cron job that scrapes the SessionNet regularly and stores metadata and binary files in PostgreSQL and the datastore.
- The frontend management task also runs in a cron job to periodically sync the scraped data into the Solr index for searching (see the pysolr sketch after this list). This includes:
- Combining metadata from scraped meetings, meeting agendas, consultations, etc.
- Converting non-PDFs to PDF
- Extracting document metadata and content from PDFs with pdfact, Tika and/or Tika+Tesseract (OCR)
- Generating preview images with the preview-service
- The frontend Django app makes the data available to users by querying the Solr search platform (see the sketch below).
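As a hedged sketch of the sync and search steps, the following uses the pysolr client against an assumed local core named `documents`; the core name, field names, and document shape are illustrative assumptions, not the project's actual Solr schema.

```python
import pysolr

# Assumed local Solr core; the project's actual core name and schema may differ.
solr = pysolr.Solr("http://localhost:8983/solr/documents", always_commit=True)

# Sync step: push one combined document per scraped file into the index.
# Field names are illustrative.
solr.add([
    {
        "id": "doc-123",
        "title": "Sitzung der Stadtverordnetenversammlung",
        "content": "...full text extracted by Tika...",
        "meeting_date": "2024-01-15T00:00:00Z",
    }
])

# Search step: the Django frontend would run full-text queries like this one.
results = solr.search("content:Bebauungsplan", rows=10)
for doc in results:
    print(doc["id"], doc.get("title"))
```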
## License
MIT