Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/melvynator/ELK_twitter
This is a data pipeline for Twitter (ETL) using the Elastic Stack: Elasticsearch, Logstash, and Kibana (version 6.1).
- Host: GitHub
- URL: https://github.com/melvynator/ELK_twitter
- Owner: melvynator
- License: apache-2.0
- Created: 2017-08-29T10:34:23.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2018-02-19T12:28:56.000Z (over 6 years ago)
- Last Synced: 2024-06-23T01:45:28.982Z (5 months ago)
- Topics: data-collection, data-visualization, elasticsearch, elk, elk-stack, kibana, logstash, machine-learning, natural-language-processing, twitter, twitter-api
- Homepage:
- Size: 6.59 MB
- Stars: 58
- Watchers: 7
- Forks: 26
- Open Issues: 5
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
Awesome Lists containing this project
README
# Out of the box Twitter pipeline using the Elastic stack (ELK)
## Contributing
This repository is completely free and open source. The license is Apache 2.0, meaning you are essentially free to use it however you want.
All contributions are welcome: ideas, pull requests, issues, documentation improvement, complaints.
## Summary
#### [+ Introduction](#introduction)
#### [+ Getting started](#getting-started)
#### [+ Requirements](#requirements)
#### [+ Resources](#resources)

## Introduction
This repository aims to provide a fully working "out-of-the-box" data pipeline for doing Machine learning on Twitter data using the ELK (Elasticsearch, Logstash, and Kibana) stack.
If you are not familiar with Logstash you may want to follow this [tutorial](https://github.com/melvynator/Logstash_tutorial/blob/master/README.md) first.
After installing ELK, you should be able to visualize dashboards like the following within 5 minutes:
The offered pipeline can be modeled by the following flow chart:
![alt text](https://github.com/melvynator/ELK_twitter/blob/master/img/pipeline.png "Pipeline")
Here are some slides that present the Logstash part of the pipeline: https://www.slideshare.net/hypto/machine-learning-in-a-twitter-etl-using-elk
Let's have a look at the different parts covered by this pipeline:
### Concerning the Logstash part
___
#### Input
The input used is Twitter; you can use it to track users, keywords, or tweets in a specific location.
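As an illustration, a minimal input sketch might look like the following. The placeholder credentials and the tracked keywords are assumptions for this example; the actual configuration lives in `twitter-pipeline.conf` (see the Logstash setup section below).
```
input {
  twitter {
    # Replace the placeholders with your own Twitter API credentials
    consumer_key => "<YOUR_CONSUMER_KEY>"
    consumer_secret => "<YOUR_CONSUMER_SECRET>"
    oauth_token => "<YOUR_OAUTH_TOKEN>"
    oauth_token_secret => "<YOUR_OAUTH_TOKEN_SECRET>"
    # Track one or more keywords (hypothetical example)
    keywords => ["elasticsearch", "logstash"]
    # Receive the full tweet object rather than the trimmed default
    full_tweet => true
  }
}
```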
#### Filter
A lot of filters are applied; they are in charge of the following tasks (a minimal sketch follows the list):
* Remove deprecated fields
* Divide the tweet into two or three events (user and tweet)
* Flatten the JSON
* Remove the fields not used
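The sketch below is illustrative only, assuming hypothetical field names; the repository's real filter chain is more extensive:
```
filter {
  # clone is a stock Logstash filter: it emits a copy of each event,
  # tagged with the clone name, so the user can be indexed separately
  clone {
    clones => ["user"]
  }
  # Drop deprecated or unused fields (field names are hypothetical)
  mutate {
    remove_field => ["geo", "contributors"]
  }
}
```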
#### Output
Two outputs are defined (a minimal sketch follows the list):
* Elasticsearch: To allow a better search of your data
* MongoDB: To store your data
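Illustratively, an output section along these lines, assuming Elasticsearch on its default port and hypothetical MongoDB settings:
```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "twitter"
  }
  # Optional: requires the logstash-output-mongodb plugin (see Requirements)
  mongodb {
    uri => "mongodb://localhost:27017"
    database => "twitter"
    collection => "tweets"
  }
}
```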
### Concerning the Elasticsearch part
____
#### Mapping
A mapping is provided and offers the following:
* A parent/child relationship between the tweet author and their tweets
* On text fields (Tweet content, User description, User location):
* 3 Analyzers
* Storing of the term vectors (For the 3 analyzers)
* Storing of the token numbers (For the 3 analyzers)
* One geofield to locate the provenance of the tweet (if available)
* Many "keyword", "integer" field to all allow data filteringThe 3 analyzers are:
1. Standard
1. English
1. A custom analyzer that keeps emoticons and punctuation, which is useful for sentiment and emotion analysis

The mapping is not dynamic: Twitter has a lot of fields that are not (or poorly) documented, so a static mapping avoids data pollution and keeps only the wanted data.
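To make this concrete, here is a minimal mapping sketch in Elasticsearch 6.x syntax. The field and relation names are assumptions for illustration, not the repository's actual mapping:
```
{
  "mappings": {
    "doc": {
      "properties": {
        "relation": { "type": "join", "relations": { "user": "tweet" } },
        "text": {
          "type": "text",
          "analyzer": "standard",
          "term_vector": "with_positions_offsets",
          "fields": {
            "english": {
              "type": "text",
              "analyzer": "english",
              "term_vector": "with_positions_offsets"
            },
            "length": { "type": "token_count", "analyzer": "standard" }
          }
        },
        "coordinates": { "type": "geo_point" },
        "lang": { "type": "keyword" },
        "retweet_count": { "type": "integer" }
      }
    }
  }
}
```
The `join` field gives the parent/child relationship between a user and their tweets; the multi-fields attach the extra analyzers and token counts to the same source text.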
### Concerning the Kibana part
____
On the Kibana side, the repository offers:
* A dashboard for general data visualization
* A dashboard for comparison between a positive and negative tweet
* Different kinds of visualizations

### Machine learning
____
Logstash makes it simple to integrate a machine learning model directly into your pipeline using the `rest` filter. A small "API" has been created to give you an idea of how you can use the `rest` filter to "label" your tweets on the fly before indexation. You can find this toy API here:
https://github.com/melvynator/toy_sentiment_API
The model is a dummy model, but you can easily introduce your own, more complex model in the form of such an API.
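For illustration, here is a hedged sketch of a `rest` filter call. The `http://localhost:5000/predict` endpoint and the field names are assumptions, not the toy API's documented interface; check the plugin's README for the full option list:
```
filter {
  rest {
    request => {
      # Hypothetical endpoint exposed by the sentiment API
      url => "http://localhost:5000/predict"
      method => "post"
      params => {
        "text" => "%{text}"
      }
    }
    # Parse the JSON response and store it under a new field
    json => true
    target => "sentiment"
  }
}
```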
## Requirements
For the pipeline to work, you need a Twitter developer account, which you can obtain here: https://dev.twitter.com/resources/signup
### Linux users
This guide assumes that you have already installed [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html), [Logstash](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html) and [Kibana](https://www.elastic.co/guide/en/kibana/current/install.html). All three need to be installed properly in order to use this pipeline.
Once ELK is installed, here are some instructions to configure Elasticsearch to start automatically when the system boots up:
```
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
```
Elasticsearch can be started and stopped as follows:
```
sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service
```
(Note that the same steps can be used for Kibana and Logstash.)
### Mac users
```
brew install elasticsearch
brew install logstash
brew install kibana
```

## Getting started
Clone the repository:
`git clone https://github.com/melvynator/ELK_twitter.git`
### Setting up Elasticsearch
____
Make sure that you don't have an index `twitter` already present.
### Setting up your Machine Learning API
____:warning: **If you don't have the need to make any API call you can skip this part** :warning:
:warning: **If you have your own API you can skip this part** :warning:
Download the toy API:
`git clone https://github.com/melvynator/toy_sentiment_API`
Go into the main repository and create a virtual environment:
```
cd toy_sentiment_API
virtualenv -p python3 venv
source venv/bin/activate
```
Then install Flask and scikit-learn (for the machine learning):
`pip install -r requirements.txt`
Then you can launch your local server:
```
python sentiment_server.py
```
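To check that the server is up, you can probe it from another terminal. The port assumes Flask's default, and `/predict` is the same hypothetical endpoint name used in the `rest` filter sketch above:
```
curl -s -X POST http://localhost:5000/predict -d "text=This pipeline is great"
```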
### Setting up Logstash
___
To start configuring Logstash, you have to open the configuration file:
`ELK_twitter/src/twitter-pipeline/config/twitter-pipeline.conf`
Replace the empty strings with your corresponding Twitter keys:
```
consumer_key => ""
consumer_secret => ""
oauth_token => ""
oauth_token_secret => ""
```
Now go into `twitter-pipeline`:
`cd ../src/twitter-pipeline`
Make sure that Elasticsearch is started and running on port `9200`.
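A quick way to check is a standard probe against the default host; Elasticsearch should answer with a small JSON document that includes its version number:
```
curl -s http://localhost:9200
```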
In addition, you also have to manually install the following plugins for Logstash:
:warning: **If you don't have the need to make any API call you don't have to install the REST Plugin** :warning:
:warning: **If you don't want to use mongoDB you don't have to install the MongoDB Plugin** :warning:
1. [MongoDB](https://github.com/logstash-plugins/logstash-output-mongodb) for Logstash (allows you to store your data in MongoDB)
`sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-mongodb`
2. [REST](https://github.com/lucashenning/logstash-filter-rest) for Logstash (allows you to make API calls)
`sudo /usr/share/logstash/bin/logstash-plugin install logstash-filter-rest`

:warning: **By default, the pipeline is only configured to output to Elasticsearch**, but if you have MongoDB installed, then you can uncomment the mongo output in the config file:
`ELK_twitter/src/twitter-pipeline/config/twitter-pipeline.conf`

:warning: **By default, the pipeline is not configured to make API calls**; if you have an API, you can uncomment the `rest` filter in the config file:
`ELK_twitter/src/twitter-pipeline/config/twitter-pipeline.conf`

Don't forget to specify your own endpoint and data.
Then, you can run the pipeline using:
`sudo /usr/share/logstash/bin/logstash -f config/twitter-pipeline.conf`
Or, with Logstash on your system `PATH`, run the following:
`logstash -f config/twitter-pipeline.conf`
You should see some logs that end up with:
`Successfully started Logstash sentiment_service endpoint {:port=>9600}`
### Setting up Kibana
___
Now go to Kibana: http://localhost:5601/
*Management => Index Patterns => Create Index Pattern*
In the text box `Index name or pattern`, type: `twitter`
In the drop-down box `Time Filter field name`, choose: `inserted_in_es_at`
Click on *Create*.
Now go to:
*Management => Saved Objects => import*
And select the file in:
`ELK_twitter/src/twitter-pipeline/kibana-visualization/kibana_charts.json`
You can now go to *Dashboard*
This GIF summarizes the different steps if you get lost.
![alt text](https://github.com/melvynator/ELK_twitter/blob/master/img/kibana_config.gif "Summary")
## Resources
Thanks to the Stack Overflow and Elastic communities for the answers provided.
https://www.elastic.co/guide/en/logstash/current/introduction.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html