Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/0000xffff/4chan-dl
download media files from 4chan.org with their posted filenames
4chan 4chan-downloader beautifulsoup4 colorama download downloader python python3 scape scraper
- Host: GitHub
- URL: https://github.com/0000xffff/4chan-dl
- Owner: 0000xFFFF
- Created: 2024-09-10T14:30:52.000Z (4 months ago)
- Default Branch: master
- Last Pushed: 2024-09-16T14:40:45.000Z (4 months ago)
- Last Synced: 2024-09-17T12:02:25.832Z (4 months ago)
- Topics: 4chan, 4chan-downloader, beautifulsoup4, colorama, download, downloader, python, python3, scape, scraper
- Language: Python
- Homepage:
- Size: 32.2 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# 4chan-dl
[![Python 3.12.5](https://img.shields.io/badge/Python-3.12.5-yellow.svg)](http://www.python.org/download/)
Download media files (.jpg, .jpeg, .webm, ...) from 4chan.org with their posted filenames.
If the thread has multiple files with the same posted filename, the files will be renamed (downloaded with a different name).
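For illustration, a minimal sketch of one way such collision-avoiding renaming can work (the helper `unique_path` is hypothetical, not taken from 4chan-dl's source):
```
import os

# pick a save path that does not clobber an already-downloaded file
def unique_path(directory, filename):
    base, ext = os.path.splitext(filename)
    candidate = os.path.join(directory, filename)
    counter = 1
    while os.path.exists(candidate):
        # append a counter until the name is free, e.g. "cat (1).jpg"
        candidate = os.path.join(directory, f"{base} ({counter}){ext}")
        counter += 1
    return candidate
```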
## Requirements - pip
* requests
* beautifulsoup4
* colorama
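The dependencies can be installed with pip:
```
pip install requests beautifulsoup4 colorama
```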
## Running
```
./4chan-dl -d downloads "https://boards.4chan.org/XX/thread/XXXXXXX"
```

## Recommended way to run
Only downloads files that are new in the thread (`-g` expands to `-crvt 5`; see the usage output below).
```
./4chan-dl -g -d downloads "https://boards.4chan.org/XX/thread/XXXXXXX"
```

## Usage
###### ./4chan-dl -h
```
usage: 4chan-dl [-h] [-d directory] [-s] [-r] [-p] [-c] [-f file.txt] [-t num_threads] [-v] [-g] url

Download media files from 4chan.org with their posted filenames

positional arguments:
  url                   4chan thread url

options:
  -h, --help            show this help message and exit
  -d directory, --directory directory
                        directory to save files to
  -s, --skip            if file exists with the same filename skip it (default: overwrite)
  -r, --recursive_skip  recursively search for filenames to skip
  -p, --postid          download files with post's id rather than posted filename
  -c, --combine         download files with post's id + posted name (_.)
  -f file.txt, --filter file.txt
                        urls to ignore stored in file
  -t num_threads, --threads num_threads
                        number of download worker threads (default: 1)
  -v, --verbose         be more verbose
  -g, --goodargs        -crvt 5
```
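The requirements and options suggest the tool fetches the thread page with requests, scrapes media links with beautifulsoup4, and downloads them on a pool of worker threads (`-t`). A minimal sketch of that general approach (all function names here are hypothetical, not 4chan-dl's actual code):
```
import os
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor

def find_media_links(thread_url):
    # fetch the thread page and collect links to media files
    html = requests.get(thread_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    links = set()
    for a in soup.find_all("a", href=True):
        href = a["href"]
        if href.endswith((".jpg", ".jpeg", ".webm", ".png", ".gif")):
            # file links on thread pages are typically protocol-relative ("//...")
            links.add("https:" + href if href.startswith("//") else href)
    return sorted(links)

def download(url, directory="downloads"):
    os.makedirs(directory, exist_ok=True)
    data = requests.get(url, timeout=30).content
    with open(os.path.join(directory, url.rsplit("/", 1)[-1]), "wb") as f:
        f.write(data)

# download with 5 worker threads, comparable to -t 5
with ThreadPoolExecutor(max_workers=5) as pool:
    list(pool.map(download, find_media_links("https://boards.4chan.org/XX/thread/XXXXXXX")))
```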