# Alibaba parser using scrapingant.com

This project shows how to use the <a href="https://scrapingant.com">ScrapingAnt</a> scraping service to load public data from Alibaba.

ScrapingAnt takes care of all the messy work needed to set up a browser and proxies for crawling, so you can focus on your data.

## Usage

To run this code you need a RapidAPI key. Go to the <a href="https://rapidapi.com/okami4kak/api/scrapingant">ScrapingAnt page on RapidAPI</a> and click the "Subscribe to Test" button. Then select a plan (there is a free one that includes 100 requests). After that you can find your API key in the "X-RapidAPI-Key" field on the <a href="https://rapidapi.com/okami4kak/api/scrapingant/endpoints">endpoints</a> page.

#### With Docker

```shell script
docker build -t alibaba_scraper . && docker run -it -v ${PWD}/data:/app/data alibaba_scraper adidas --rapidapi_key <RAPID_API_KEY>
```

#### Without Docker

This code was written for Python 3.7+.

```shell script
git clone https://github.com/ScrapingAnt/alibaba_scraper.git
cd alibaba_scraper
python3 -m venv .env
.env/bin/pip install -r requirements.txt
.env/bin/python main.py --help
.env/bin/python main.py adidas --rapidapi_key <RAPID_API_KEY>
```

#### Available params

```
.env/bin/python main.py --help

Usage: main.py [OPTIONS] SEARCH_STRING

Options:
  --rapidapi_key TEXT             Api key from https://rapidapi.com/okami4kak/api/scrapingant  [required]
  --pages INTEGER                 Number of search pages to parse
  --country [ae|br|cn|de|es|fr|gb|hk|in|it|il|jp|nl|ru|sa|us]
                                  Country of proxies location
  --help                          Show this message and exit.
```

#### Sample output

Output is saved to the data/ directory in CSV format.

![](result_example.png)
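For orientation, a single page fetch through RapidAPI might be built roughly like the sketch below. This is a hedged illustration only: the `scrapingant.p.rapidapi.com` host, the `/post` endpoint path, and the JSON body shape are assumptions based on common RapidAPI conventions, not taken from this repository — check the endpoints page for the real contract. Only the `X-RapidAPI-Key` / `X-RapidAPI-Host` header scheme is the standard RapidAPI mechanism.

```python
import json
import urllib.request

RAPIDAPI_HOST = "scrapingant.p.rapidapi.com"  # assumed host name (RapidAPI convention)

def build_request(api_key: str, target_url: str) -> urllib.request.Request:
    """Build a ScrapingAnt request using the usual RapidAPI header scheme.

    The endpoint path and JSON body here are illustrative assumptions;
    consult the RapidAPI endpoints page for the actual API contract.
    """
    body = json.dumps({"url": target_url}).encode("utf-8")
    return urllib.request.Request(
        f"https://{RAPIDAPI_HOST}/post",  # hypothetical endpoint path
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-RapidAPI-Key": api_key,    # the key from the endpoints page
            "X-RapidAPI-Host": RAPIDAPI_HOST,
        },
        method="POST",
    )

req = build_request("<RAPID_API_KEY>", "https://www.alibaba.com/trade/search?SearchText=adidas")
# urllib.request.urlopen(req)  # uncomment with a real key to actually send the request
```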
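Since results land in `data/` as CSV, they are easy to post-process with the standard library. A minimal sketch, assuming a header row in the output file — the `adidas.csv` filename and the `title`/`price` column names below are made-up placeholders, since the real schema isn't documented in this README:

```python
import csv
import tempfile
from pathlib import Path

def load_results(csv_path):
    """Load one scraper output file into a list of dicts (header row becomes the keys)."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# Tiny self-contained demo with a fabricated file standing in for data/<search_string>.csv
with tempfile.TemporaryDirectory() as d:
    sample = Path(d) / "adidas.csv"
    sample.write_text("title,price\nRunning shoes,12.50\nTrack jacket,30.00\n", encoding="utf-8")
    rows = load_results(sample)

print(rows[0]["title"])  # -> Running shoes
```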