Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/wangrui6/Zhihu-KOL
- Host: GitHub
- URL: https://github.com/wangrui6/Zhihu-KOL
- Owner: wangrui6
- Created: 2023-02-25T05:49:29.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-05-12T11:40:31.000Z (over 1 year ago)
- Last Synced: 2024-08-01T22:41:02.152Z (4 months ago)
- Language: Jupyter Notebook
- Size: 3.77 MB
- Stars: 20
- Watchers: 1
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: change_vpn_loc.sh
Awesome Lists containing this project
- Awesome-LLMs-Datasets - [Zhihu-KOL](https://github.com/wangrui6/Zhihu-KOL) (Instruction Fine-tuning Datasets / General Instruction Fine-tuning Datasets)
- StarryDivineSky - wangrui6/Zhihu-KOL - A dataset for the Open Assistant project (https://open-assistant.io/). (NLP corpora and datasets / Large language dialogue models and data)
README
Data Scraping Project for Zhihu Dataset
A simple project to scrape data from Zhihu.
## About The Project
This project provides a way to scrape data from the Zhihu site. We use it to build a dataset for the Open Assistant LLM project (https://open-assistant.io/). Use `scrape_process.py` to get started.
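For example, assuming no extra command-line flags are needed (the script's exact CLI is not documented here), the entry point can be run directly:
```sh
python scrape_process.py
```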
### Built With
The project is written in Python.
We primarily use:
* [Playwright](https://playwright.dev/) as the headless browser,
* [Ray](https://www.ray.io/) for parallel processing,
* [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) for HTML extraction,
* [DuckDB](https://duckdb.org/) for data persistence.

## Getting Started
### Installation
Install all dependencies:
* pip
```sh
pip install -r requirements.txt
```
* Install Playwright
```sh
playwright install
```
* Run the following command to install the necessary system libraries, if needed
```sh
sudo apt install libatk1.0-0 libatk-bridge2.0-0 libcups2 libatspi2.0-0 libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1 libxkbcommon0 libpango-1.0-0 libcairo2 libasound2
```

## Useful concepts
To understand how we scrape data from Zhihu,
it is useful to define the hierarchy of question-answer categorization.
Currently we work with four levels of categorization: base topics, common topics, questions, and answers.
Each level contains multiple instances of the next level down, i.e. one common topic might contain hundreds of unique questions, and one question can contain hundreds of unique answers.
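As a rough illustration of this containment (the class names below are ours, not taken from the codebase), the hierarchy could be modelled as:
```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    url: str
    content: str

@dataclass
class Question:
    title: str
    answers: list[Answer] = field(default_factory=list)      # often hundreds per question

@dataclass
class CommonTopic:
    name: str
    questions: list[Question] = field(default_factory=list)  # often hundreds per topic

@dataclass
class BaseTopic:
    name: str
    common_topics: list[CommonTopic] = field(default_factory=list)
```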
An example of such categorization is shown below.

![zhihu_answer_categorization](zhihu_question_hierarchy.png)

Currently, the scraping pipeline consists of three independent processes that scrape three different levels of the categorization.
Each process is independent of the others and only requires input from local files to start working.
We provide a list of base topics to initialise the pipeline. The first process, scrape_common_topic, scrapes the common topics for all base topics; all common topics are saved periodically to the local file system, as in the sketch below.
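A minimal sketch of that save-as-you-go pattern, assuming a hypothetical `fetch_common_topics` helper and a CSV output file (the real process's file format and scraping logic may differ):
```python
import csv

def fetch_common_topics(base_topic: str) -> list[str]:
    """Hypothetical helper; the real project scrapes these from Zhihu."""
    return []

def scrape_common_topics(base_topics: list[str], out_path: str = "common_topics.csv",
                         save_every: int = 50) -> None:
    """Collect common topics for each base topic, flushing to disk periodically."""
    buffer = []
    with open(out_path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for base in base_topics:
            for topic in fetch_common_topics(base):
                buffer.append((base, topic))
                if len(buffer) >= save_every:
                    writer.writerows(buffer)
                    f.flush()  # periodic persistence, so a crash loses little work
                    buffer.clear()
        writer.writerows(buffer)  # flush whatever remains
```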
The second process is the heaviest in the system, as it uses a headless browser to scrape answer URLs.
It also uses Ray for parallel processing to speed up the entire chain of processes. The question-answer URLs are saved periodically to the local file system. The last process downloads the actual answers using API requests; Shadowsocks (SOCKS5) proxies are used to bypass the API rate limit, and the extracted answers are persisted to the database. A sketch of these last two stages follows the diagram.

![process_architecture](scrape_process.png)
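A condensed sketch of those two stages: Ray fans out rate-limited API calls through a local SOCKS5 proxy (such as one exposed by a Shadowsocks client), and the results land in DuckDB. The proxy address, return format, and table schema are assumptions for illustration, and `requests` needs the `requests[socks]` extra for SOCKS5 support:
```python
import duckdb
import ray
import requests

ray.init()

# Assumed address of a local SOCKS5 endpoint (e.g. a Shadowsocks client).
PROXIES = {"http": "socks5://127.0.0.1:1080", "https": "socks5://127.0.0.1:1080"}

@ray.remote
def fetch_answer(answer_url: str) -> tuple[str, str]:
    # Route the API request through the proxy to sidestep per-IP rate limits.
    resp = requests.get(answer_url, proxies=PROXIES, timeout=30)
    resp.raise_for_status()
    return answer_url, resp.text

def persist_answers(answer_urls: list[str]) -> None:
    # Fan out one Ray task per URL, then gather all results at once.
    rows = ray.get([fetch_answer.remote(url) for url in answer_urls])
    con = duckdb.connect("zhihu_answers.duckdb")
    con.execute("CREATE TABLE IF NOT EXISTS answers (url VARCHAR, body VARCHAR)")
    con.executemany("INSERT INTO answers VALUES (?, ?)", rows)
    con.close()
```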
## Roadmap
- [x] Decouple all scraping processes
- [x] Add Ray For Parallel Processing
- [x] Add Shadowsocks for API rate-limit bypass
- [x] Add DuckDB for persistence
- [ ] Scrape comment sections
- [ ] Scale up scraping processes

## License
Distributed under the MIT License. See `LICENSE.txt` for more information.