Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/alash3al/scrapyr
a simple & tiny scrapy clustering solution, considered a drop-in replacement for scrapyd
clustering golang python scrapy scrapyd-server
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/alash3al/scrapyr
- Owner: alash3al
- License: apache-2.0
- Created: 2019-10-26T18:54:09.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2023-08-30T14:01:41.000Z (about 1 year ago)
- Last Synced: 2024-07-09T00:07:35.447Z (4 months ago)
- Topics: clustering, golang, python, scrapy, scrapyd-server
- Language: Go
- Homepage:
- Size: 11 MB
- Stars: 50
- Watchers: 4
- Forks: 6
- Open Issues: 3
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
scrapyr
========
> a very simple scrapy orchestrator engine that can be distributed among multiple machines to build a scrapy cluster. Under the hood it uses redis as a task broker; this may change in the future to support pluggable brokers, but for now it does the job.

Features
========
- uses a simple, human-friendly configuration language called `hcl`.
- multiple types of queues/workers (`lifo`, `fifo`, `weight`); see the conceptual sketch after this list.
- you can define multiple workers with different types of queues.
- ability to override the contents of the scrapy project's `settings.py` from the same configuration file.
- a `status` endpoint that helps you understand what is going on.
- an `enqueue` endpoint that lets you push a job into the specified queue, as well as the ability to execute the job instantly and return the extracted items.
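For the weight-based mode, the idea is that tasks carrying a higher weight are picked up before lower-weight ones. The snippet below is only a conceptual sketch of such a queue built on redis (the broker scrapyr uses), via a sorted set; it is not scrapyr's actual implementation, and the key name `tasks:worker1`, the port, and the task payload are made up for illustration.

```python
# Conceptual sketch of a "max weight, first out" queue on top of redis.
# NOT scrapyr's own code: assumes a local redis and the redis-py client,
# and the key name "tasks:worker1" is invented for the example.
import json

import redis

r = redis.Redis(host="127.0.0.1", port=6379, db=1)


def enqueue(task: dict, weight: int) -> None:
    # store the serialized task in a sorted set, scored by its weight
    r.zadd("tasks:worker1", {json.dumps(task): weight})


def dequeue():
    # pop the member with the highest score -> "max weight, first out"
    popped = r.zpopmax("tasks:worker1")
    if not popped:
        return None
    member, _score = popped[0]
    return json.loads(member)


enqueue({"spider": "spider_name"}, weight=10)
enqueue({"spider": "other_spider"}, weight=1)
print(dequeue())  # -> {'spider': 'spider_name'}, the heavier task comes out first
```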
API Examples
============

- Getting the status of the cluster
```bash
curl --request GET \
--url http://localhost:1993/status \
--header 'content-type: application/json'
```

- Push a task into the queue, utilizing the worker `worker1` which is pre-defined in `scrapyr.hcl`
```bash
# worker -> the worker name (predefined in scrapyr.hcl)
# spider -> the scrapy spider to be executed
# max_execution_time -> the max duration the scrapy process should take
# args -> key-value strings that will be translated to `-a key=value ...` for each key-value pair
# weight -> the weight of the task itself (in case of weight-based workers defined in scrapyr.hcl)
curl --request POST \
--url http://localhost:1993/enqueue \
--header 'content-type: application/json' \
--data '{
    "worker": "worker1",
    "spider": "spider_name",
    "max_execution_time": "20s",
    "args": {
        "scrapy_arg_name": "scrapy_arg_value"
    },
    "weight": 10
}'
```
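The `args` map in the request above is translated to scrapy's standard `-a key=value` arguments, and scrapy forwards those to the spider's constructor, where they become instance attributes. The sketch below shows that standard scrapy behavior only; the file name, class name, URL, and how the argument is used are all hypothetical and not taken from this repository.

```python
# spiders/spider_name.py -- hypothetical spider matching the enqueue example above.
# Anything sent as `-a scrapy_arg_name=...` (i.e. the "args" map in the enqueue payload)
# is passed by scrapy to the spider's __init__ and becomes an instance attribute.
import scrapy


class SpiderNameSpider(scrapy.Spider):
    name = "spider_name"

    def __init__(self, scrapy_arg_name=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # value of "scrapy_arg_name" from the enqueue request's "args" map
        self.scrapy_arg_name = scrapy_arg_name

    def start_requests(self):
        # use the argument, e.g. to build the start URL (illustrative only)
        yield scrapy.Request(f"https://example.com/?q={self.scrapy_arg_name}")

    def parse(self, response):
        # the items yielded here are the "extracted items" mentioned above
        yield {"url": response.url, "arg": self.scrapy_arg_name}
```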
Configurations
===============

> here is an example of the `scrapyr.hcl`

```hcl
# the webserver listening address
listen_addr = ":1993"

# redis connection string
# it uses a url-style connection string
# example: redis://username:password@hostname:port/database_number
redis_dsn = "redis://127.0.0.1:6378/1"

scrapy {
    project_dir = "${HOME}/playground/tstscrapy"
    python_bin  = "/usr/bin/python3"
    items_dir   = "${PWD}/data"
}

worker worker1 {
    // which method you want the worker to use
    // lifo: last in, first out
    // fifo: first in, first out
    // weight: max weight, first out
    use = "weight"

    // max processes to be executed at the same time for this worker
    max_procs = 5
}

# sometimes you may need to control the `ProjectName/ProjectName/settings.py` file from here,
# so we added this special key which mounts its contents into the `settings.py` file.
settings_py = <<SETTINGS
# ... the desired contents of the scrapy project's settings.py go here ...
SETTINGS
```

> you can download the latest binary build from the [releases](https://github.com/alash3al/scrapyr/releases) page or by using [docker](https://hub.docker.com/r/alash3al/scrapyr) directly.

Contributing
=============
- Fork the repo
- Create a feature branch
- Push your changes
- Create a pull request

License
========
Apache License v2.0

Author
=======
- Mohamed Al Ashaal
- Software Engineer
- [email protected]