Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/michaelcurrin/scroll-and-scrape
Store tweets from Twitter search results, using browser scraping
- Host: GitHub
- URL: https://github.com/michaelcurrin/scroll-and-scrape
- Owner: MichaelCurrin
- License: mit
- Created: 2019-07-07T21:16:29.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2024-07-13T07:21:27.000Z (4 months ago)
- Last Synced: 2024-10-11T19:36:31.362Z (26 days ago)
- Topics: python, python3, scraping, selenium, tweets, twitter, twitter-search, webscraping
- Language: HTML
- Homepage:
- Size: 41 KB
- Stars: 4
- Watchers: 2
- Forks: 0
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Scroll and Scrape
> Store tweets from Twitter search results, using browser scraping

[![GitHub tag](https://img.shields.io/github/tag/MichaelCurrin/scroll-and-scrape?include_prereleases=&sort=semver)](https://github.com/MichaelCurrin/scroll-and-scrape/releases/)
[![License](https://img.shields.io/badge/License-MIT-blue)](#license)[![Made with Python](https://img.shields.io/badge/Python->=3.6-blue?logo=python&logoColor=white)](https://python.org)
[![Made for Twitter](https://img.shields.io/badge/Made_for-Twitter-blue?logo=twitter&logoColor=white)](https://twitter.com)
[![dependency - selenium](https://img.shields.io/badge/dependency-selenium-blue)](https://pypi.org/project/selenium)
[![dependency - beautifulsoup4](https://img.shields.io/badge/dependency-beautifulsoup4-blue)](https://pypi.org/project/beautifulsoup4)

## Purpose
This application uses Python and a browser (controlled through Selenium) to load a page, scrape its contents, and save the data. This approach renders the full DOM (not possible with a plain HTTP request) and adds waiting and scrolling logic so that dynamically loaded elements can be pulled in.
This project is aimed at scraping tweets from a search, where scrolling is needed to load more results, though it could also be used for a user timeline.
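The waiting and scrolling logic described above can be sketched as a loop that scrolls to the bottom of the page, pauses, and stops once the page height no longer grows. This is a minimal sketch, not code from this project; the function name and parameters are illustrative, and `driver` stands in for a Selenium `WebDriver` (anything with an `execute_script` method works):

```python
import time


def scroll_until_stable(driver, pause=1.0, max_rounds=10):
    """Scroll to the page bottom until its height stops growing.

    `driver` is any Selenium-like object with an `execute_script` method.
    Returns the number of scroll rounds performed.
    """
    last_height = driver.execute_script("return document.body.scrollHeight")
    for round_count in range(1, max_rounds + 1):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)  # Give dynamically loaded content time to appear.
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            return round_count  # No new content appeared, so stop.
        last_height = new_height
    return max_rounds
```

With a real `selenium.webdriver.Chrome()` instance, this would be called after loading the search URL, before parsing the page source.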
## Requirements
- [Python 3](https://python.org/)
- Chrome browser

## Clone
```sh
$ git clone [email protected]:MichaelCurrin/scroll-and-scrape.git
$ cd scroll-and-scrape
```

## Installation
Install Python 3.
Create a virtual environment and activate it.
Install project packages:
```sh
$ pip install -r requirements.txt
$ pip install -r requirements-dev.txt
```

## Usage
```sh
$ cd scrollscrape
$ python main.py
```

## Background
This is a simple Python 3 application, based on existing scripts which are included in the [research](/research/) directory. See also [gist](https://gist.github.com/artjomb/07209e859f9bf0206f76).
The goal is to get all the Twitter tweets for a search query, going back as far as possible.
Using the Twitter API is restrictive - it only gives a week's worth of data. Note that, to keep this application simple, only the tweet ID needs to be stored and none of the tweet or author data. Once you have a tweet ID, no matter how old, you can use the Twitter API to fetch the full tweet and author data for it.
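Tweet IDs can be pulled out of the status permalinks in the scraped HTML. The sketch below uses only the standard library; the regex and the sample markup are illustrative assumptions, not the actual HTML Twitter serves, which changes over time:

```python
import re

# Status permalinks take the form /<username>/status/<numeric id>.
STATUS_LINK = re.compile(r'href="/[^/"]+/status/(\d+)"')


def extract_tweet_ids(html):
    """Return unique tweet IDs from permalink hrefs, in order of first appearance."""
    seen = []
    for tweet_id in STATUS_LINK.findall(html):
        if tweet_id not in seen:
            seen.append(tweet_id)
    return seen
```

Deduplication matters here because infinite scrolling can leave the same tweet in the DOM more than once.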
## Future development
So far the main script works okay, but it is not getting all known tweets for a particular query, so it needs improvement. The output format can also be improved - for now it prints to stdout, which can be redirected to a text file.
Also, this could be made more efficient by using a headless browser. It could also use a more modern library such as [requests-html](https://github.com/kennethreitz/requests-html).
## License
Released under [MIT](/LICENSE) by [@MichaelCurrin](https://github.com/MichaelCurrin).