# Dynamic URL Crawler

## Project Description

Dynamic URL Crawler is a Python-based asynchronous web scraping tool built with Playwright. It scrapes product-related links dynamically from a given list of URLs, effectively handles infinite-scrolling pages, and extracts URLs matching specific patterns.
The scraped data is stored in a structured JSON format for further use.

## Features

- **Asynchronous Crawling**: Uses asyncio and Playwright for high-performance, non-blocking web scraping.
- **Dynamic Scrolling**: Automatically scrolls to the bottom of pages to ensure complete data extraction from infinite-scrolling websites.
- **Customizable URL Patterns**: Scrapes links matching specific product-related patterns (e.g., `/product/`, `/dp/`, `/shop/`).
- **JSON Storage**: Saves extracted product links to a `product_urls.json` file.
- **Scalable Architecture**: Handles multiple URLs concurrently for efficient scraping.

## Installation

### Prerequisites

1. **Python**: Ensure Python 3 is installed.

### Steps to Install

1. Clone the repository:
   ```bash
   git clone https://github.com/RaKAsHASH/urlExtractor.git
   cd urlExtractor
   ```
2. Set up a virtual environment:
   ```bash
   python3 -m venv venv
   ```
3. Activate the virtual environment:
   ```bash
   source ./venv/bin/activate
   ```
4. Install the dependencies. Install and set up Playwright with the following commands:
   ```bash
   pip install playwright
   playwright install
   ```

## Usage

1. Add the target URLs to the `url` list in the script:
   ```python
   url = ["https://www.amazon.in/s?k=i+phone+15+pro", "https://www.flipkart.com/", ...]
   ```
2. Run the script:
   ```bash
   python urlExtractor.py
   ```
3. View the results in the `product_urls.json` file.

## Code Structure

- **`DynamicUrlCrawler` Class**: Manages the crawling process and data extraction.
- **`start_crawl` Method**: Launches the browser, distributes tasks, and manages concurrent URL processing.
- **`scrape_page` Method**: Handles infinite scrolling and extracts product links.
- **`save_results` Method**: Saves the extracted links to a JSON file.

## Example Output

An example `product_urls.json` file:
```json
{
  "https://www.amazon.in/s?k=i+phone+15+pro": [
    "/product/iphone-15-pro",
    "/dp/B0C7XYZ"
  ],
  "https://www.flipkart.com/": [
    "/item/iphone-case",
    "/p/smartphone"
  ]
}
```

## Dependencies

- Python 3
- [Playwright](https://playwright.dev)

## Limitations

- Limited to scraping product-related links that match the predefined patterns.
- Cannot follow paginated result pages to collect product links.
- Relies on a static 2-second wait for page loading.
- Requires a stable internet connection and proper handling of rate limits.

---

Developed with 💻 and 🧠 by Harjeet
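The pattern matching described under **Customizable URL Patterns** can be sketched in plain Python. This is a minimal illustration, not the repository's actual code: the `PRODUCT_PATTERNS` regex and the `extract_product_links` helper are hypothetical names, and the pattern list simply mirrors the examples given above.

```python
import re

# Hypothetical pattern list mirroring the README's examples
# (/product/, /dp/, /shop/, plus /item/ and /p/ from the sample output).
PRODUCT_PATTERNS = re.compile(r"/(?:product|dp|shop|item|p)/", re.IGNORECASE)

def extract_product_links(hrefs):
    """Keep only hrefs that look like product pages, preserving order and de-duplicating."""
    seen = set()
    result = []
    for href in hrefs:
        if PRODUCT_PATTERNS.search(href) and href not in seen:
            seen.add(href)
            result.append(href)
    return result
```

A real crawl would feed every `href` collected from the page into such a filter before writing the JSON output.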
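The overall flow from the **Code Structure** section can also be sketched without a browser. The method names below mirror the ones listed in that section, but the bodies are assumptions: `fetch_links` is a stand-in for the real Playwright scrolling-and-scraping logic, which requires a browser to run.

```python
import asyncio
import json

class DynamicUrlCrawler:
    """Sketch of the crawler's concurrent flow; not the repository's actual code."""

    def __init__(self, urls):
        self.urls = urls
        self.results = {}

    async def fetch_links(self, url):
        # Placeholder: the real scrape_page would scroll the page with
        # Playwright and collect matching hrefs.
        await asyncio.sleep(0)
        return [f"{url}/product/example"]

    async def scrape_page(self, url):
        # One task per URL; results are keyed by the source URL.
        self.results[url] = await self.fetch_links(url)

    async def start_crawl(self):
        # Process all URLs concurrently, as the README describes.
        await asyncio.gather(*(self.scrape_page(u) for u in self.urls))

    def save_results(self, path="product_urls.json"):
        # Persist the {source_url: [links]} mapping as JSON.
        with open(path, "w") as f:
            json.dump(self.results, f, indent=2)

crawler = DynamicUrlCrawler(["https://example.com/a", "https://example.com/b"])
asyncio.run(crawler.start_crawl())
```

Using `asyncio.gather` here is what makes the architecture scale: each URL is scraped in its own task rather than sequentially.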