Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Download streaming tweets that match specific keywords, and dump the results to a file.
- Host: GitHub
- URL: https://github.com/dataquestio/twitter-scrape
- Owner: dataquestio
- Created: 2016-05-09T18:38:45.000Z (over 8 years ago)
- Default Branch: master
- Last Pushed: 2020-10-07T10:31:09.000Z (about 4 years ago)
- Last Synced: 2024-11-05T19:34:18.793Z (about 2 months ago)
- Language: Python
- Homepage:
- Size: 2.93 KB
- Stars: 161
- Watchers: 20
- Forks: 100
- Open Issues: 13
Metadata Files:
- Readme: README.md
README
# Twitter Scrape
Scrape tweets from Twitter into a database, then convert the database to a CSV file.
## Installation
* `pip install -r requirements.txt`
## Setup
* Create a file called `private.py`.
* Sign up for a Twitter [developer account](https://dev.twitter.com/).
* Create an application [here](https://apps.twitter.com/).
* Set the following keys in `private.py`, using the values from the app you created:
* `TWITTER_KEY`
* `TWITTER_SECRET`
* `TWITTER_APP_KEY`
* `TWITTER_APP_SECRET`
* Set the following key in `private.py`:
* `CONNECTION_STRING` -- use `sqlite:///tweets.db` as a default if you need to. PostgreSQL is recommended, but not required.

## Usage
* `python scrape.py` to scrape. Use `Ctrl + C` to stop.
* `python dump.py` to generate `tweets.csv`, which contains all the tweet data that was scraped.
* To change the scraper's behavior, edit the settings in `settings.py`.
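The setup steps above can be sketched as a minimal `private.py`. The placeholder values here are assumptions; substitute the real credentials from the Twitter app you created:

```python
# private.py -- local credentials file; keep it out of version control.

# Placeholder values: copy the real ones from your Twitter app's
# "Keys and tokens" page.
TWITTER_KEY = "your-access-token"
TWITTER_SECRET = "your-access-token-secret"
TWITTER_APP_KEY = "your-consumer-key"
TWITTER_APP_SECRET = "your-consumer-secret"

# Database connection string. SQLite works out of the box;
# PostgreSQL is recommended for larger scrapes.
CONNECTION_STRING = "sqlite:///tweets.db"
# CONNECTION_STRING = "postgresql://user:password@localhost/tweets"
```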
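The dump step can be sketched with the standard library alone. This is not the repo's actual `dump.py`; the `tweets` table name and its columns are assumptions, and a real run would read the path out of `CONNECTION_STRING`:

```python
import csv
import sqlite3

def dump_to_csv(db_path: str, csv_path: str) -> int:
    """Copy every row of the (assumed) tweets table into a CSV file.

    Returns the number of rows written, excluding the header.
    """
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute("SELECT * FROM tweets")
        # Column names from the cursor become the CSV header row.
        headers = [col[0] for col in cursor.description]
        with open(csv_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(headers)
            rows = cursor.fetchall()
            writer.writerows(rows)  # one CSV row per tweet
        return len(rows)
    finally:
        conn.close()
```

Reading the whole table with `fetchall()` is fine at this repo's scale; for millions of tweets you would iterate the cursor instead to keep memory flat.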