https://github.com/bayer-group/colid-indexing-crawler-service
The Indexing Crawler Service (ICS) repository is part of the Corporate Linked Data Catalog - short: COLID - application. It is responsible for extracting data from an RDF storage system, transforming and enriching the data, and finally sending it via a message queue to the DMP Webservice for indexing.
- Host: GitHub
- URL: https://github.com/bayer-group/colid-indexing-crawler-service
- Owner: Bayer-Group
- License: bsd-3-clause
- Created: 2020-07-07T14:40:06.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2024-12-12T15:15:18.000Z (5 months ago)
- Last Synced: 2025-03-26T08:11:49.220Z (about 2 months ago)
- Topics: cloud-native, colid, data-catalog, data-catalogue, elasticsearch, fair, fair-data, findable, linked-data, rdf, shacl, triplestore
- Language: C#
- Homepage: https://bayer-group.github.io/COLID-Documentation/
- Size: 137 KB
- Stars: 5
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
# COLID Indexing Crawler Service
The Indexing Crawler Service is part of the Corporate Linked Data Catalog - short: COLID - application.
[Here](https://bayer-group.github.io/COLID-Documentation/#/?id=introduction) you can find an introduction to the application.
A description of all its functions is available [here](https://bayer-group.github.io/COLID-Documentation/#/functional-specification). The complete guide can be found at the following [link](https://bayer-group.github.io/COLID-Documentation/#/).
The Indexing Crawler Service (ICS) is responsible for extracting data from an RDF storage system, transforming and enriching the data, and finally sending it via a message queue to the DMP Webservice for indexing.
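The flow can be pictured as a small extract-transform-publish pipeline. The following C# sketch only illustrates that idea under stated assumptions: the RDF storage is queried through a SPARQL HTTP endpoint, the result bindings are flattened into a document, and the document is handed to a message-queue publisher. All names (`ResourceCrawler`, `IMessagePublisher`, the example queue name) are illustrative and are not taken from the actual ICS code base.

```csharp
// Minimal sketch (not the actual ICS implementation) of the crawl -> transform -> publish flow.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public interface IMessagePublisher
{
    // Hypothetical abstraction over the message queue used for indexing.
    Task PublishAsync(string queue, string jsonPayload);
}

public class ResourceCrawler
{
    private readonly HttpClient _http;
    private readonly IMessagePublisher _publisher;

    public ResourceCrawler(HttpClient http, IMessagePublisher publisher)
    {
        _http = http;
        _publisher = publisher;
    }

    // Extract: run a SPARQL SELECT query against the RDF storage endpoint.
    public async Task<string> QueryStoreAsync(Uri sparqlEndpoint, string sparqlQuery)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, sparqlEndpoint)
        {
            Content = new FormUrlEncodedContent(
                new Dictionary<string, string> { ["query"] = sparqlQuery })
        };
        request.Headers.Add("Accept", "application/sparql-results+json");

        var response = await _http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }

    // Transform: flatten the SPARQL result bindings into a simple property map
    // that a search index could consume.
    public Dictionary<string, List<string>> ToIndexDocument(string sparqlResultsJson)
    {
        var document = new Dictionary<string, List<string>>();
        using var json = JsonDocument.Parse(sparqlResultsJson);
        var bindings = json.RootElement.GetProperty("results").GetProperty("bindings");
        foreach (var binding in bindings.EnumerateArray())
        {
            foreach (var property in binding.EnumerateObject())
            {
                var value = property.Value.GetProperty("value").GetString() ?? string.Empty;
                if (!document.TryGetValue(property.Name, out var values))
                    document[property.Name] = values = new List<string>();
                values.Add(value);
            }
        }
        return document;
    }

    // Publish: send the enriched document to the indexing queue ("dmp-indexing" is a made-up name).
    public Task SendForIndexingAsync(Dictionary<string, List<string>> document) =>
        _publisher.PublishAsync("dmp-indexing", JsonSerializer.Serialize(document));
}
```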
## Getting Started
### Demo
Want to see the COLID application in action and play around with the [API](https://bayer-group.github.io/COLID-Documentation/#/)? The quickest way to get started is to check out our [setup](https://github.com/Bayer-Group/COLID-Setup) repository.
### Building
A complete guide can be found at the following [link](https://bayer-group.github.io/COLID-Documentation/#/). It includes a short guide to building the application.
### Running
A complete guide can be found at the following [link](https://bayer-group.github.io/COLID-Documentation/#/). It includes a short guide to running the application.
## How to Contribute
Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct and the process for submitting pull requests to us.