{"id":18331034,"url":"https://github.com/dataquestio/twitter-scrape","last_synced_at":"2025-05-02T22:31:03.144Z","repository":{"id":49838221,"uuid":"58399470","full_name":"dataquestio/twitter-scrape","owner":"dataquestio","description":"Download streaming tweets that match specific keywords, and dump the results to a file.","archived":false,"fork":false,"pushed_at":"2020-10-07T10:31:09.000Z","size":3,"stargazers_count":162,"open_issues_count":13,"forks_count":100,"subscribers_count":19,"default_branch":"master","last_synced_at":"2025-04-07T07:42:48.604Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dataquestio.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2016-05-09T18:38:45.000Z","updated_at":"2025-01-13T07:14:48.000Z","dependencies_parsed_at":"2022-09-16T07:50:55.684Z","dependency_job_id":null,"html_url":"https://github.com/dataquestio/twitter-scrape","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dataquestio%2Ftwitter-scrape","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dataquestio%2Ftwitter-scrape/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dataquestio%2Ftwitter-scrape/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dataquestio%2Ftwitter-scrape/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dataquestio","download_url":"https://codeload.github.com/dataquestio/twitter-scrape/tar.gz/refs
/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252116082,"owners_count":21697304,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-05T19:27:47.433Z","updated_at":"2025-05-02T22:30:59.188Z","avatar_url":"https://github.com/dataquestio.png","language":"Python","readme":"# Twitter Scrape\n\nScrape tweets from Twitter into a database, then convert the database to a CSV file.\n\n## Installation\n\n* `pip install -r requirements.txt`\n\n## Setup\n\n* Create a file called `private.py`.\n* Sign up for a Twitter [developer account](https://dev.twitter.com/).\n* Create an application [here](https://apps.twitter.com/).\n* Set the following keys in `private.py`.  You can get these values from the app you created:\n    * `TWITTER_KEY`\n    * `TWITTER_SECRET`\n    * `TWITTER_APP_KEY`\n    * `TWITTER_APP_SECRET`\n* Set the following key in `private.py`:\n    * `CONNECTION_STRING` -- use `sqlite:///tweets.db` as a default if you need to.  PostgreSQL is recommended, but not required.\n\n## Usage\n\n* `python scrape.py` to scrape.  
Use `Ctrl + C` to stop.\n* `python dump.py` to generate `tweets.csv`, which contains all the tweet data that was scraped.\n* To change the scraper's behavior, edit the settings in `settings.py`.","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdataquestio%2Ftwitter-scrape","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdataquestio%2Ftwitter-scrape","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdataquestio%2Ftwitter-scrape/lists"}