{"id":15905994,"url":"https://github.com/lucacappelletti94/tinycrawler","last_synced_at":"2025-03-21T11:32:22.249Z","repository":{"id":57475726,"uuid":"125920882","full_name":"LucaCappelletti94/tinycrawler","owner":"LucaCappelletti94","description":"Web crawler that uses multiprocessing and arbitrarily many proxies to traverse and download websites","archived":false,"fork":false,"pushed_at":"2023-04-02T14:32:07.000Z","size":8255,"stargazers_count":3,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"master","last_synced_at":"2024-10-13T13:51:52.355Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/LucaCappelletti94.png","metadata":{"files":{"readme":"README.rst","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-03-19T21:11:03.000Z","updated_at":"2023-04-09T18:35:57.000Z","dependencies_parsed_at":"2022-09-07T17:13:09.331Z","dependency_job_id":null,"html_url":"https://github.com/LucaCappelletti94/tinycrawler","commit_stats":null,"previous_names":[],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LucaCappelletti94%2Ftinycrawler","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LucaCappelletti94%2Ftinycrawler/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LucaCappelletti94%2Ftinycrawler/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LucaCappelletti94%2Ftinycrawler/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/LucaCappelletti94","download_url":"https://codeload.github.com/LucaCappelletti94/tinycrawler/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221814772,"owners_count":16885059,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-10-06T13:20:31.194Z","updated_at":"2024-10-28T10:02:51.330Z","avatar_url":"https://github.com/LucaCappelletti94.png","language":"Python","readme":".. role:: py(code)\n   :language: python\n\n.. role:: json(code)\n   :language: json\n\n\nTinyCrawler\n====================\n\n|travis| |sonar_quality| |sonar_maintainability| |sonar_coverage| |code_climate_maintainability| |pip|\n\nAn highly customizable crawler that uses multiprocessing and proxies to download one or more websites following a given filter, search and save functions.\n\n**REMEMBER THAT DDOS IS ILLEGAL. DO NOT USE THIS SOFTWARE FOR ILLEGAL PURPOSE.**\n\nInstalling TinyCrawler\n------------------------\n\n.. code:: shell\n\n    pip install tinycrawler\n\nTODOs for next version\n------------------------\n\n- Test proxies while normally downloading. - DONE\n- Parallelize different domains downloads. 
Preview (Test case)
---------------------
This is a preview of the console when running `test_base.py`_.

|preview|

Basic usage example
---------------------

.. code:: python

    from tinycrawler import TinyCrawler, Log
    from bs4 import BeautifulSoup


    def url_validator(url: str, logger: Log) -> bool:
        """Return a boolean representing whether the crawler should parse the given url."""
        return url.startswith("http://interestingurl.com")


    def file_parser(url: str, soup: BeautifulSoup, logger: Log):
        """Parse and elaborate the given soup."""
        # soup parsing...
        pass

    TinyCrawler(
        file_parser=file_parser,
        url_validator=url_validator
    ).run("https://www.example.com/")

Example loading proxies
-------------------------

.. code:: python

    from tinycrawler import TinyCrawler, Log
    from bs4 import BeautifulSoup


    def url_validator(url: str, logger: Log) -> bool:
        """Return a boolean representing whether the crawler should parse the given url."""
        return url.startswith("http://interestingurl.com")


    def file_parser(url: str, soup: BeautifulSoup, logger: Log):
        """Parse and elaborate the given soup."""
        # soup parsing...
        pass

    crawler = TinyCrawler(
        file_parser=file_parser,
        url_validator=url_validator
    )
    crawler.load_proxies("http://myexampletestserver.com", "path/to/proxies.json")
    crawler.run("https://www.example.com/")


Proxies are expected to be in the following format:

.. code:: json

    [
      {
        "ip": "89.236.17.108",
        "port": 3128,
        "type": [
          "https",
          "http"
        ]
      },
      {
        "ip": "128.199.141.151",
        "port": 3128,
        "type": [
          "https",
          "http"
        ]
      }
    ]
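For reference, here is a minimal sketch of reading and filtering a file in this format with only the standard library; ``load_https_proxies`` is a hypothetical helper, not part of the TinyCrawler API.

.. code:: python

    import json


    def load_https_proxies(path: str) -> list:
        """Read a proxies file in the format above, keeping only the
        entries that declare https support."""
        with open(path, "r") as f:
            proxies = json.load(f)
        return [proxy for proxy in proxies if "https" in proxy["type"]]

    # e.g. [{"ip": "89.236.17.108", "port": 3128, "type": ["https", "http"]}, ...]
    https_proxies = load_https_proxies("path/to/proxies.json")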
License
--------------
The software is released under the MIT license.

.. _`test_base.py`: https://github.com/LucaCappelletti94/tinycrawler/blob/master/tests/test_base.py

.. |preview| image:: https://github.com/LucaCappelletti94/tinycrawler/blob/master/preview.png?raw=true

.. |travis| image:: https://travis-ci.org/LucaCappelletti94/tinycrawler.png
   :target: https://travis-ci.org/LucaCappelletti94/tinycrawler

.. |sonar_quality| image:: https://sonarcloud.io/api/project_badges/measure?project=tinycrawler.lucacappelletti&metric=alert_status
    :target: https://sonarcloud.io/dashboard/index/tinycrawler.lucacappelletti

.. |sonar_maintainability| image:: https://sonarcloud.io/api/project_badges/measure?project=tinycrawler.lucacappelletti&metric=sqale_rating
    :target: https://sonarcloud.io/dashboard/index/tinycrawler.lucacappelletti

.. |sonar_coverage| image:: https://sonarcloud.io/api/project_badges/measure?project=tinycrawler.lucacappelletti&metric=coverage
    :target: https://sonarcloud.io/dashboard/index/tinycrawler.lucacappelletti

.. |code_climate_maintainability| image:: https://api.codeclimate.com/v1/badges/25fb7c6119e188dbd12c/maintainability
   :target: https://codeclimate.com/github/LucaCappelletti94/tinycrawler/maintainability
   :alt: Maintainability

.. |pip| image:: https://badge.fury.io/py/tinycrawler.svg
    :target: https://badge.fury.io/py/tinycrawler