Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/kamadorueda/league-of-legends-datapipeline
A Python data pipeline for the League of Legends game.
datascience league-of-legends pipeline
- Host: GitHub
- URL: https://github.com/kamadorueda/league-of-legends-datapipeline
- Owner: kamadorueda
- License: gpl-3.0
- Created: 2019-02-10T23:06:04.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2019-02-17T05:07:34.000Z (almost 6 years ago)
- Last Synced: 2024-11-07T03:45:22.287Z (2 months ago)
- Topics: datascience, league-of-legends, pipeline
- Language: Python
- Homepage: https://github.com/kamadorueda/League-of-Legends-Datapipeline
- Size: 35.2 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
README
# League-of-Legends-Datapipeline
A Python data pipeline for the League of Legends game. It sequentially fetches:

- match information: `/lol/match/v4/matches/{matchId}`
- match timeline: `/lol/match/v4/timelines/by-match/{matchId}`

and produces a stream of formatted CSV schemas and records (see the sketch below).
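As an illustration of that fetch step, a single iteration might look like the sketch below. The host URL, auth header, and function name are assumptions based on the public Riot API, not code taken from this repository:

```python
import requests

def fetch_match(region: str, match_id: int, api_token: str) -> tuple[dict, dict]:
    """Fetch one match and its timeline, in the order described above."""
    base = f"https://{region}.api.riotgames.com"  # assumed Riot API host
    headers = {"X-Riot-Token": api_token}         # assumed auth header
    match = requests.get(
        f"{base}/lol/match/v4/matches/{match_id}", headers=headers, timeout=10
    )
    match.raise_for_status()
    timeline = requests.get(
        f"{base}/lol/match/v4/timelines/by-match/{match_id}",
        headers=headers,
        timeout=10,
    )
    timeline.raise_for_status()
    return match.json(), timeline.json()
```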
The CSV can later be loaded into a data warehouse
such as Amazon Redshift or a MySQL instance, among others.

# How to use
First, create an initial `state.json` file:

```json
{
"match_id": 1000000000,
"region": "na1",
"api_token": "RGAPI-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
```

`match_id` indicates the match ID from which to start fetching.
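As an illustrative sketch (these checks are assumed, not the repository's code), parsing and sanity-checking that state could look like:

```python
import json
from typing import IO

REQUIRED_KEYS = ("match_id", "region", "api_token")

def load_state(stream: IO[str]) -> dict:
    """Parse a state document and verify the expected keys are present."""
    state = json.load(stream)
    missing = [key for key in REQUIRED_KEYS if key not in state]
    if missing:
        raise KeyError(f"state is missing required keys: {missing}")
    return state
```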
The data pipeline reads the state from stdin:
```bash
cat state.json | ./datapipeline.py
```

and writes a final state to stdout:
```bash
(cat state.json | ./datapipeline.py) > new_state.json
```

If successful, the new state will be some match IDs ahead of the initial state, and the fetched data will be stored locally.
If not successful, the new state will be the same as the initial state.
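In other words, the contract is: read the state from stdin, write a (possibly advanced) state to stdout, and report success through the exit code. A minimal sketch of that shape, with a hypothetical `fetch_and_store` helper standing in for the real fetch-and-write-CSV step:

```python
import json
import sys

def fetch_and_store(state: dict) -> int:
    """Hypothetical stand-in for the real fetch/CSV-writing step;
    returns how many matches were successfully processed."""
    raise NotImplementedError

def main() -> int:
    initial = json.load(sys.stdin)
    state = dict(initial)
    try:
        state["match_id"] += fetch_and_store(state)
        json.dump(state, sys.stdout)    # advanced state on success
        return 0
    except Exception:
        json.dump(initial, sys.stdout)  # unchanged state on failure
        return 1

if __name__ == "__main__":
    sys.exit(main())
```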
Putting it all together:
```bash
# if the pipeline went well
if (cat state.json | ./datapipeline.py) > new_state.json; then
# update the state
mv new_state.json state.json
# condensate everything
./condensate.py
fi
```

This will use the state file that you just created,
produce output that can be loaded into your data warehouse
(or viewed locally with any CSV viewer, such as LibreOffice),
and update the state file if the run succeeded.

Run this process in a loop to continually fetch data.
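The shell snippet above can be wrapped in such a loop. As a sketch, an equivalent Python driver (the sleep interval is an arbitrary choice, not something the repository specifies) could be:

```python
import shutil
import subprocess
import time

# Repeat the shell snippet above forever: pipe the state through the
# pipeline, and only advance the state file when the run succeeded.
while True:
    with open("state.json") as old, open("new_state.json", "w") as new:
        result = subprocess.run(["./datapipeline.py"], stdin=old, stdout=new)
    if result.returncode == 0:  # the pipeline went well
        shutil.move("new_state.json", "state.json")
        subprocess.run(["./condensate.py"], check=True)
    time.sleep(1)  # stay gentle with Riot API rate limits
```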