{"id":13707346,"url":"https://github.com/unclecode/crawl4ai","last_synced_at":"2026-04-01T16:40:34.168Z","repository":{"id":239022217,"uuid":"798201435","full_name":"unclecode/crawl4ai","owner":"unclecode","description":"🚀🤖 Crawl4AI: Open-source LLM Friendly Web Crawler \u0026 Scraper. Don't be shy, join here: https://discord.gg/jP8KfhDhyN","archived":false,"fork":false,"pushed_at":"2026-03-25T01:51:21.000Z","size":149230,"stargazers_count":62547,"open_issues_count":36,"forks_count":6392,"subscribers_count":326,"default_branch":"main","last_synced_at":"2026-03-25T12:56:41.052Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://crawl4ai.com","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/unclecode.png","metadata":{"files":{"readme":"README-first.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":"ROADMAP.md","authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":"unclecode"}},"created_at":"2024-05-09T09:48:50.000Z","updated_at":"2026-03-25T12:55:48.000Z","dependencies_parsed_at":"2025-11-28T02:02:58.048Z","dependency_job_id":null,"html_url":"https://github.com/unclecode/crawl4ai","commit_stats":null,"previous_names":["unclecode/crawl4ai"],"tags_count":44,"template":false,"template_full_name":null,"purl":"pkg:github/unclecode/crawl4ai","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unclecode%2Fcrawl4ai","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unclecode%2Fcrawl4ai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unclecode%2Fcrawl4ai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unclecode%2Fcrawl4ai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/unclecode","download_url":"https://codeload.github.com/unclecode/crawl4ai/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unclecode%2Fcrawl4ai/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31290538,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-01T13:12:26.723Z","status":"ssl_error","status_checked_at":"2026-04-01T13:12:25.102Z","response_time":53,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-02T22:01:28.093Z","updated_at":"2026-04-01T16:40:34.147Z","avatar_url":"https://github.com/unclecode.png","language":"Python","readme":"# 🚀🤖 Crawl4AI: 
Open-source LLM Friendly Web Crawler \u0026 Scraper.\n\n\u003cdiv align=\"center\"\u003e\n\n\u003ca href=\"https://trendshift.io/repositories/11716\" target=\"_blank\"\u003e\u003cimg src=\"https://trendshift.io/api/badge/repositories/11716\" alt=\"unclecode%2Fcrawl4ai | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"/\u003e\u003c/a\u003e\n\n[![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)\n[![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)\n\n[![PyPI version](https://badge.fury.io/py/crawl4ai.svg)](https://badge.fury.io/py/crawl4ai)\n[![Python Version](https://img.shields.io/pypi/pyversions/crawl4ai)](https://pypi.org/project/crawl4ai/)\n[![Downloads](https://static.pepy.tech/badge/crawl4ai/month)](https://pepy.tech/project/crawl4ai)\n[![GitHub Sponsors](https://img.shields.io/github/sponsors/unclecode?style=flat\u0026logo=GitHub-Sponsors\u0026label=Sponsors\u0026color=pink)](https://github.com/sponsors/unclecode)\n\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://x.com/crawl4ai\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/Follow%20on%20X-000000?style=for-the-badge\u0026logo=x\u0026logoColor=white\" alt=\"Follow on X\" /\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://www.linkedin.com/company/crawl4ai\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/Follow%20on%20LinkedIn-0077B5?style=for-the-badge\u0026logo=linkedin\u0026logoColor=white\" alt=\"Follow on LinkedIn\" /\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://discord.gg/jP8KfhDhyN\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/Join%20our%20Discord-5865F2?style=for-the-badge\u0026logo=discord\u0026logoColor=white\" alt=\"Join our Discord\" /\u003e\n    \u003c/a\u003e\n  \u003c/p\u003e\n\u003c/div\u003e\n\nCrawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for LLMs, AI agents, and data pipelines. Open source, flexible, and built for real-time performance, Crawl4AI empowers developers with unmatched speed, precision, and deployment ease.  \n\n[✨ Check out the latest update, v0.7.0](#-recent-updates)\n\n🎉 **Version 0.7.0 is now available!** The Adaptive Intelligence Update introduces groundbreaking features: Adaptive Crawling that learns website patterns, Virtual Scroll support for infinite pages, intelligent Link Preview with 3-layer scoring, Async URL Seeder for massive discovery, and significant performance improvements. [Read the release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.0.md)\n\n\u003cdetails\u003e\n\u003csummary\u003e🤓 \u003cstrong\u003eMy Personal Story\u003c/strong\u003e\u003c/summary\u003e\n\nMy journey with computers started in childhood when my dad, a computer scientist, introduced me to an Amstrad computer. Those early days sparked a fascination with technology, leading me to pursue computer science and specialize in NLP during my postgraduate studies. It was during this time that I first delved into web crawling, building tools to help researchers organize papers and extract information from publications, a challenging yet rewarding experience that honed my skills in data extraction.\n\nFast forward to 2023: I was working on a tool for a project and needed a crawler to convert a webpage into markdown. 
While exploring solutions, I found one that claimed to be open-source but required creating an account and generating an API token. Worse, it turned out to be a SaaS model charging $16, and its quality didn’t meet my standards. Frustrated, I realized this was a deeper problem. That frustration turned into turbo anger mode, and I decided to build my own solution. In just a few days, I created Crawl4AI. To my surprise, it went viral, earning thousands of GitHub stars and resonating with a global community.\n\nI made Crawl4AI open-source for two reasons. First, it’s my way of giving back to the open-source community that has supported me throughout my career. Second, I believe data should be accessible to everyone, not locked behind paywalls or monopolized by a few. Open access to data lays the foundation for the democratization of AI, a vision where individuals can train their own models and take ownership of their information. This library is the first step in a larger journey to create the best open-source data extraction and generation tool the world has ever seen, built collaboratively by a passionate community.\n\nThank you to everyone who has supported this project, used it, and shared feedback. Your encouragement motivates me to dream even bigger. Join us, file issues, submit PRs, or spread the word. Together, we can build a tool that truly empowers people to access their own data and reshape the future of AI.\n\u003c/details\u003e\n\n## 🧐 Why Crawl4AI?\n\n1. **Built for LLMs**: Creates smart, concise Markdown optimized for RAG and fine-tuning applications.  \n2. **Lightning Fast**: Delivers results faster with real-time, cost-efficient performance.  \n3. **Flexible Browser Control**: Offers session management, proxies, and custom hooks for seamless data access.  \n4. **Heuristic Intelligence**: Uses advanced algorithms for efficient extraction, reducing reliance on costly models.  \n5. **Open Source \u0026 Deployable**: Fully open-source with no API keys—ready for Docker and cloud integration.  \n6. **Thriving Community**: Actively maintained by a vibrant community and the #1 trending GitHub repository.\n\n## 🚀 Quick Start \n\n1. Install Crawl4AI:\n```bash\n# Install the package\npip install -U crawl4ai\n\n# For pre-release versions\npip install crawl4ai --pre\n\n# Run post-installation setup\ncrawl4ai-setup\n\n# Verify your installation\ncrawl4ai-doctor\n```\n\nIf you encounter any browser-related issues, you can install the browser manually:\n```bash\npython -m playwright install --with-deps chromium\n```\n\n2. Run a simple web crawl with Python:\n```python\nimport asyncio\nfrom crawl4ai import *\n\nasync def main():\n    async with AsyncWebCrawler() as crawler:\n        result = await crawler.arun(\n            url=\"https://www.nbcnews.com/business\",\n        )\n        print(result.markdown)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n3. 
Or use the new command-line interface:\n```bash\n# Basic crawl with markdown output\ncrwl https://www.nbcnews.com/business -o markdown\n\n# Deep crawl with BFS strategy, max 10 pages\ncrwl https://docs.crawl4ai.com --deep-crawl bfs --max-pages 10\n\n# Use LLM extraction with a specific question\ncrwl https://www.example.com/products -q \"Extract all product prices\"\n```\n\n## ✨ Features \n\n\u003cdetails\u003e\n\u003csummary\u003e📝 \u003cstrong\u003eMarkdown Generation\u003c/strong\u003e\u003c/summary\u003e\n\n- 🧹 **Clean Markdown**: Generates clean, structured Markdown with accurate formatting.\n- 🎯 **Fit Markdown**: Heuristic-based filtering to remove noise and irrelevant parts for AI-friendly processing.\n- 🔗 **Citations and References**: Converts page links into a numbered reference list with clean citations.\n- 🛠️ **Custom Strategies**: Users can create their own Markdown generation strategies tailored to specific needs.\n- 📚 **BM25 Algorithm**: Employs BM25-based filtering for extracting core information and removing irrelevant content. \n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e📊 \u003cstrong\u003eStructured Data Extraction\u003c/strong\u003e\u003c/summary\u003e\n\n- 🤖 **LLM-Driven Extraction**: Supports all LLMs (open-source and proprietary) for structured data extraction.\n- 🧱 **Chunking Strategies**: Implements chunking (topic-based, regex, sentence-level) for targeted content processing.\n- 🌌 **Cosine Similarity**: Finds relevant content chunks based on user queries for semantic extraction.\n- 🔎 **CSS-Based Extraction**: Fast schema-based data extraction using XPath and CSS selectors.\n- 🔧 **Schema Definition**: Define custom schemas for extracting structured JSON from repetitive patterns.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e🌐 \u003cstrong\u003eBrowser Integration\u003c/strong\u003e\u003c/summary\u003e\n\n- 🖥️ **Managed Browser**: Use user-owned browsers with full control, avoiding bot detection.\n- 🔄 **Remote Browser Control**: Connect over the Chrome DevTools Protocol for remote, large-scale data extraction.\n- 👤 **Browser Profiler**: Create and manage persistent profiles with saved authentication states, cookies, and settings.\n- 🔒 **Session Management**: Preserve browser states and reuse them for multi-step crawling.\n- 🧩 **Proxy Support**: Seamlessly connect to proxies with authentication for secure access.\n- ⚙️ **Full Browser Control**: Modify headers, cookies, user agents, and more for tailored crawling setups.\n- 🌍 **Multi-Browser Support**: Compatible with Chromium, Firefox, and WebKit.\n- 📐 **Dynamic Viewport Adjustment**: Automatically adjusts the browser viewport to match page content, ensuring complete rendering and capturing of all elements.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e🔎 \u003cstrong\u003eCrawling \u0026 Scraping\u003c/strong\u003e\u003c/summary\u003e\n\n- 🖼️ **Media Support**: Extract images, audio, videos, and responsive image formats like `srcset` and `picture`.\n- 🚀 **Dynamic Crawling**: Execute JavaScript and wait on async or sync conditions to extract dynamic content.\n- 📸 **Screenshots**: Capture page screenshots during crawling for debugging or analysis.\n- 📂 **Raw Data Crawling**: Directly process raw HTML (`raw:`) or local files (`file://`).\n- 🔗 **Comprehensive Link Extraction**: Extracts internal and external links, plus embedded iframe content.\n- 🛠️ **Customizable Hooks**: Define hooks at every step to customize crawling behavior.\n- 💾 **Caching**: Cache data for improved 
speed and to avoid redundant fetches.\n- 📄 **Metadata Extraction**: Retrieve structured metadata from web pages.\n- 📡 **IFrame Content Extraction**: Seamless extraction from embedded iframe content.\n- 🕵️ **Lazy Load Handling**: Waits for images to fully load, ensuring no content is missed due to lazy loading.\n- 🔄 **Full-Page Scanning**: Simulates scrolling to load and capture all dynamic content, perfect for infinite scroll pages.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e🚀 \u003cstrong\u003eDeployment\u003c/strong\u003e\u003c/summary\u003e\n\n- 🐳 **Dockerized Setup**: Optimized Docker image with FastAPI server for easy deployment.\n- 🔑 **Secure Authentication**: Built-in JWT token authentication for API security.\n- 🔄 **API Gateway**: One-click deployment with secure token authentication for API-based workflows.\n- 🌐 **Scalable Architecture**: Designed for mass-scale production and optimized server performance.\n- ☁️ **Cloud Deployment**: Ready-to-deploy configurations for major cloud platforms.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e🎯 \u003cstrong\u003eAdditional Features\u003c/strong\u003e\u003c/summary\u003e\n\n- 🕶️ **Stealth Mode**: Avoid bot detection by mimicking real users.\n- 🏷️ **Tag-Based Content Extraction**: Refine crawling based on custom tags, headers, or metadata.\n- 🔗 **Link Analysis**: Extract and analyze all links for detailed data exploration.\n- 🛡️ **Error Handling**: Robust error management for seamless execution.\n- 🔐 **CORS \u0026 Static Serving**: Supports filesystem-based caching and cross-origin requests.\n- 📖 **Clear Documentation**: Simplified and updated guides for onboarding and advanced usage.\n- 🙌 **Community Recognition**: Acknowledges contributors and pull requests for transparency.\n\n\u003c/details\u003e\n\n## Try it Now!\n\n✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)\n\n✨ Visit our [Documentation Website](https://docs.crawl4ai.com/)\n\n## Installation 🛠️\n\nCrawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker.\n\n\u003cdetails\u003e\n\u003csummary\u003e🐍 \u003cstrong\u003eUsing pip\u003c/strong\u003e\u003c/summary\u003e\n\nChoose the installation option that best fits your needs:\n\n### Basic Installation\n\nFor basic web crawling and scraping tasks:\n\n```bash\npip install crawl4ai\ncrawl4ai-setup # Set up the browser\n```\n\nBy default, this will install the asynchronous version of Crawl4AI, using Playwright for web crawling.\n\n👉 **Note**: When you install Crawl4AI, the `crawl4ai-setup` command should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods:\n\n1. Through the command line:\n\n   ```bash\n   playwright install\n   ```\n\n2. If the above doesn't work, try this more specific command:\n\n   ```bash\n   python -m playwright install chromium\n   ```\n\nThis second method has proven to be more reliable in some cases.\n\n---\n\n### Installation with Synchronous Version\n\nThe sync version is deprecated and will be removed in future versions. 
If you need the synchronous version using Selenium:\n\n```bash\npip install crawl4ai[sync]\n```\n\n---\n\n### Development Installation\n\nFor contributors who plan to modify the source code:\n\n```bash\ngit clone https://github.com/unclecode/crawl4ai.git\ncd crawl4ai\npip install -e .                    # Basic installation in editable mode\n```\n\nInstall optional features:\n\n```bash\npip install -e \".[torch]\"           # With PyTorch features\npip install -e \".[transformer]\"     # With Transformer features\npip install -e \".[cosine]\"          # With cosine similarity features\npip install -e \".[sync]\"            # With synchronous crawling (Selenium)\npip install -e \".[all]\"             # Install all optional features\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e🐳 \u003cstrong\u003eDocker Deployment\u003c/strong\u003e\u003c/summary\u003e\n\n\u003e 🚀 **Now Available!** Our completely redesigned Docker implementation is here! This new solution makes deployment more efficient and seamless than ever.\n\n### New Docker Features\n\nThe new Docker implementation includes:\n- **Browser pooling** with page pre-warming for faster response times\n- **Interactive playground** to test and generate request code\n- **MCP integration** for direct connection to AI tools like Claude Code\n- **Comprehensive API endpoints** including HTML extraction, screenshots, PDF generation, and JavaScript execution\n- **Multi-architecture support** with automatic detection (AMD64/ARM64)\n- **Optimized resources** with improved memory management\n\n### Getting Started\n\n```bash\n# Pull and run the latest release\ndocker pull unclecode/crawl4ai:0.7.0\ndocker run -d -p 11235:11235 --name crawl4ai --shm-size=1g unclecode/crawl4ai:0.7.0\n\n# Visit the playground at http://localhost:11235/playground\n```\n\nFor complete documentation, see our [Docker Deployment Guide](https://docs.crawl4ai.com/core/docker-deployment/).\n\n\u003c/details\u003e\n\n---\n\n### Quick Test\n\nRun a quick test once the Docker container is up:\n\n```python\nimport requests\n\n# Submit a crawl job\nresponse = requests.post(\n    \"http://localhost:11235/crawl\",\n    json={\"urls\": [\"https://example.com\"], \"priority\": 10}\n)\nif response.status_code == 200:\n    print(\"Crawl job submitted successfully.\")\n\n# Parse the response once and branch on its shape\ndata = response.json()\nif \"results\" in data:\n    results = data[\"results\"]\n    print(\"Crawl job completed. Results:\")\n    for result in results:\n        print(result)\nelse:\n    task_id = data[\"task_id\"]\n    print(f\"Crawl job submitted. Task ID: {task_id}\")\n    result = requests.get(f\"http://localhost:11235/task/{task_id}\")\n    print(result.json())\n```\n\nFor more examples, see our [Docker Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_example.py). For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://docs.crawl4ai.com/basic/docker-deployment/).\n
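\nThe Quick Test above fetches the task endpoint only once. If your server responds with a `task_id`, you will usually want to poll until the job finishes. Below is a minimal polling sketch that reuses the same `/crawl` and `/task/{task_id}` endpoints; the `status` and `results` fields on the task payload are assumptions here, so verify them against your server's actual response schema:\n\n```python\nimport time\n\nimport requests\n\n# Submit a crawl job (same /crawl endpoint as in the Quick Test)\nresponse = requests.post(\n    \"http://localhost:11235/crawl\",\n    json={\"urls\": [\"https://example.com\"], \"priority\": 10}\n)\nresponse.raise_for_status()\ndata = response.json()\n\nif \"results\" in data:\n    # Synchronous path: the results came back directly\n    results = data[\"results\"]\nelse:\n    # Asynchronous path: poll the task endpoint until the job settles.\n    # The \"completed\"/\"failed\" status values are assumptions; adjust them\n    # to whatever your server actually reports.\n    task_id = data[\"task_id\"]\n    while True:\n        task = requests.get(f\"http://localhost:11235/task/{task_id}\").json()\n        if task.get(\"status\") in (\"completed\", \"failed\"):\n            break\n        time.sleep(1)\n    results = task.get(\"results\", [])\n\nfor result in results:\n    print(result)\n```\n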
\n## 🔬 Advanced Usage Examples 🔬\n\nYou can check the project structure in the directory [docs/examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples). There you can find a variety of examples; a few popular ones are shared below.\n\n\u003cdetails\u003e\n\u003csummary\u003e📝 \u003cstrong\u003eHeuristic Markdown Generation with Clean and Fit Markdown\u003c/strong\u003e\u003c/summary\u003e\n\n```python\nimport asyncio\nfrom crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode\nfrom crawl4ai.content_filter_strategy import PruningContentFilter, BM25ContentFilter\nfrom crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator\n\nasync def main():\n    browser_config = BrowserConfig(\n        headless=True,\n        verbose=True,\n    )\n    run_config = CrawlerRunConfig(\n        cache_mode=CacheMode.ENABLED,\n        markdown_generator=DefaultMarkdownGenerator(\n            content_filter=PruningContentFilter(threshold=0.48, threshold_type=\"fixed\", min_word_threshold=0)\n        ),\n        # markdown_generator=DefaultMarkdownGenerator(\n        #     content_filter=BM25ContentFilter(user_query=\"WHEN_WE_FOCUS_BASED_ON_A_USER_QUERY\", bm25_threshold=1.0)\n        # ),\n    )\n    \n    async with AsyncWebCrawler(config=browser_config) as crawler:\n        result = await crawler.arun(\n            url=\"https://docs.micronaut.io/4.7.6/guide/\",\n            config=run_config\n        )\n        print(len(result.markdown.raw_markdown))\n        print(len(result.markdown.fit_markdown))\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e🖥️ \u003cstrong\u003eExecuting JavaScript \u0026 Extracting Structured Data without LLMs\u003c/strong\u003e\u003c/summary\u003e\n\n```python\nimport asyncio\nimport json\nfrom crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode\nfrom crawl4ai import JsonCssExtractionStrategy\n\nasync def main():\n    # Schema describing the repeated course blocks on the page\n    schema = {\n        \"name\": \"KidoCode Courses\",\n        \"baseSelector\": \"section.charge-methodology .w-tab-content \u003e div\",\n        \"fields\": [\n            {\n                \"name\": \"section_title\",\n                \"selector\": \"h3.heading-50\",\n                \"type\": \"text\",\n            },\n            {\n                \"name\": \"section_description\",\n                \"selector\": \".charge-content\",\n                \"type\": \"text\",\n            },\n            {\n                \"name\": \"course_name\",\n                \"selector\": \".text-block-93\",\n                \"type\": \"text\",\n            },\n            {\n                \"name\": \"course_description\",\n                \"selector\": \".course-content-text\",\n                \"type\": \"text\",\n            },\n            {\n                \"name\": \"course_icon\",\n                \"selector\": \".image-92\",\n                \"type\": \"attribute\",\n                \"attribute\": \"src\"\n            }\n        ]\n    }\n\n    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n\n    browser_config = BrowserConfig(\n        headless=False,\n        verbose=True\n    )\n    run_config = CrawlerRunConfig(\n        extraction_strategy=extraction_strategy,\n        js_code=[\"\"\"(async () =\u003e {const tabs = document.querySelectorAll(\"section.charge-methodology .tabs-menu-3 \u003e div\");for(let tab of tabs) {tab.scrollIntoView();tab.click();await new Promise(r =\u003e setTimeout(r, 500));}})();\"\"\"],\n        cache_mode=CacheMode.BYPASS\n    )\n        \n    async with AsyncWebCrawler(config=browser_config) as crawler:\n        \n        result = await crawler.arun(\n            url=\"https://www.kidocode.com/degrees/technology\",\n            
config=run_config\n        )\n\n        courses = json.loads(result.extracted_content)\n        print(f\"Successfully extracted {len(courses)} courses\")\n        print(json.dumps(courses[0], indent=2))\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e📚 \u003cstrong\u003eExtracting Structured Data with LLMs\u003c/strong\u003e\u003c/summary\u003e\n\n```python\nimport os\nimport asyncio\nfrom crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LLMConfig\nfrom crawl4ai import LLMExtractionStrategy\nfrom pydantic import BaseModel, Field\n\nclass OpenAIModelFee(BaseModel):\n    model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n    input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n    output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n\nasync def main():\n    browser_config = BrowserConfig(verbose=True)\n    run_config = CrawlerRunConfig(\n        word_count_threshold=1,\n        extraction_strategy=LLMExtractionStrategy(\n            # Here you can use any provider that the LiteLLM library supports, for instance: ollama/qwen2\n            # provider=\"ollama/qwen2\", api_token=\"no-token\", \n            llm_config = LLMConfig(provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY')), \n            schema=OpenAIModelFee.schema(),\n            extraction_type=\"schema\",\n            instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens. \n            Do not miss any models in the entire content. One extracted model JSON format should look like this: \n            {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n        ),            \n        cache_mode=CacheMode.BYPASS,\n    )\n    \n    async with AsyncWebCrawler(config=browser_config) as crawler:\n        result = await crawler.arun(\n            url='https://openai.com/api/pricing/',\n            config=run_config\n        )\n        print(result.extracted_content)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e🤖 \u003cstrong\u003eUsing Your Own Browser with a Custom User Profile\u003c/strong\u003e\u003c/summary\u003e\n\n```python\nimport os\nimport asyncio\nfrom pathlib import Path\nfrom crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode\n\nasync def test_news_crawl():\n    # Create a persistent user data directory\n    user_data_dir = os.path.join(Path.home(), \".crawl4ai\", \"browser_profile\")\n    os.makedirs(user_data_dir, exist_ok=True)\n\n    browser_config = BrowserConfig(\n        verbose=True,\n        headless=True,\n        user_data_dir=user_data_dir,\n        use_persistent_context=True,\n    )\n    run_config = CrawlerRunConfig(\n        cache_mode=CacheMode.BYPASS\n    )\n    \n    async with AsyncWebCrawler(config=browser_config) as crawler:\n        url = \"ADDRESS_OF_A_CHALLENGING_WEBSITE\"\n        \n        result = await crawler.arun(\n            url,\n            config=run_config,\n            magic=True,\n        )\n        \n        print(f\"Successfully crawled {url}\")\n        print(f\"Content length: {len(result.markdown)}\")\n\nif __name__ == \"__main__\":\n    asyncio.run(test_news_crawl())\n```\n\n\u003c/details\u003e\n
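\nThe CLI section above shows deep crawling via `--deep-crawl bfs`; here is a rough Python counterpart. Treat it as a sketch rather than the official API: the `BFSDeepCrawlStrategy` import path and the `deep_crawl_strategy` parameter are assumptions based on the deep-crawling docs for recent versions, so confirm them against the version you have installed.\n\n\u003cdetails\u003e\n\u003csummary\u003e🧭 \u003cstrong\u003eDeep Crawling in Python (sketch)\u003c/strong\u003e\u003c/summary\u003e\n\n```python\nimport asyncio\nfrom crawl4ai import AsyncWebCrawler, CrawlerRunConfig\n# Import path is an assumption; verify it in your installed version\nfrom crawl4ai.deep_crawling import BFSDeepCrawlStrategy\n\nasync def main():\n    run_config = CrawlerRunConfig(\n        deep_crawl_strategy=BFSDeepCrawlStrategy(\n            max_depth=2,             # follow links up to two hops from the start page\n            include_external=False,  # stay on the starting domain\n        ),\n    )\n    async with AsyncWebCrawler() as crawler:\n        # In deep-crawl mode, arun() yields one result per crawled page\n        results = await crawler.arun(\"https://docs.crawl4ai.com\", config=run_config)\n        for result in results:\n            print(result.url, len(result.markdown.raw_markdown))\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n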
\n## ✨ Recent Updates\n\n### Version 0.7.0 Release Highlights - The Adaptive Intelligence Update\n\n- **🧠 Adaptive Crawling**: Your crawler now learns and adapts to website patterns automatically:\n  ```python\n  config = AdaptiveConfig(\n      confidence_threshold=0.7, # Min confidence to stop crawling\n      max_depth=5, # Maximum crawl depth\n      max_pages=20, # Maximum number of pages to crawl\n      strategy=\"statistical\"\n  )\n  \n  async with AsyncWebCrawler() as crawler:\n      adaptive_crawler = AdaptiveCrawler(crawler, config)\n      state = await adaptive_crawler.digest(\n          start_url=\"https://news.example.com\",\n          query=\"latest news content\"\n      )\n  # Crawler learns patterns and improves extraction over time\n  ```\n\n- **🌊 Virtual Scroll Support**: Complete content extraction from infinite scroll pages:\n  ```python\n  scroll_config = VirtualScrollConfig(\n      container_selector=\"[data-testid='feed']\",\n      scroll_count=20,\n      scroll_by=\"container_height\",\n      wait_after_scroll=1.0\n  )\n  \n  result = await crawler.arun(url, config=CrawlerRunConfig(\n      virtual_scroll_config=scroll_config\n  ))\n  ```\n\n- **🔗 Intelligent Link Analysis**: 3-layer scoring system for smart link prioritization:\n  ```python\n  link_config = LinkPreviewConfig(\n      query=\"machine learning tutorials\",\n      score_threshold=0.3,\n      concurrent_requests=10\n  )\n  \n  result = await crawler.arun(url, config=CrawlerRunConfig(\n      link_preview_config=link_config,\n      score_links=True\n  ))\n  # Links ranked by relevance and quality\n  ```\n\n- **🎣 Async URL Seeder**: Discover thousands of URLs in seconds:\n  ```python\n  seeder = AsyncUrlSeeder(SeedingConfig(\n      source=\"sitemap+cc\",\n      pattern=\"*/blog/*\",\n      query=\"python tutorials\",\n      score_threshold=0.4\n  ))\n  \n  urls = await seeder.discover(\"https://example.com\")\n  ```\n\n- **⚡ Performance Boost**: Up to 3x faster with optimized resource handling and memory efficiency\n\nRead the full details in our [0.7.0 Release Notes](https://docs.crawl4ai.com/blog/release-v0.7.0) or check the [CHANGELOG](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md).\n\n## Version Numbering in Crawl4AI\n\nCrawl4AI follows standard Python version numbering conventions (PEP 440) to help users understand the stability and features of each release.\n\n### Version Numbers Explained\n\nOur version numbers follow this pattern: `MAJOR.MINOR.PATCH` (e.g., 0.4.3)\n\n#### Pre-release Versions\nWe use different suffixes to indicate development stages:\n\n- `dev` (0.4.3dev1): Development versions, unstable\n- `a` (0.4.3a1): Alpha releases, experimental features\n- `b` (0.4.3b1): Beta releases, feature complete but needs testing\n- `rc` (0.4.3rc1): Release candidates, potential final version\n\n#### Installation\n- Regular installation (stable version):\n  ```bash\n  pip install -U crawl4ai\n  ```\n\n- Install pre-release versions:\n  ```bash\n  pip install crawl4ai --pre\n  ```\n\n- Install specific version:\n  ```bash\n  pip install crawl4ai==0.4.3b1\n  ```\n\n#### Why Pre-releases?\nWe use pre-releases to:\n- Test new features in real-world scenarios\n- Gather feedback before final releases\n- Ensure stability for production users\n- Allow early adopters to try new features\n\nFor production environments, we recommend using the stable version. For testing new features, you can opt in to pre-releases using the `--pre` flag.
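\n\nTo confirm which build you are actually running, a quick check using only the standard library (no assumptions about the package's own exports):\n\n```python\nimport importlib.metadata\n\n# Reports the installed distribution version, stable or pre-release\nprint(importlib.metadata.version(\"crawl4ai\"))\n```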
\n\n## 📖 Documentation \u0026 Roadmap \n\n\u003e 🚨 **Documentation Update Alert**: We're undertaking a major documentation overhaul next week to reflect recent updates and improvements. Stay tuned for a more comprehensive and up-to-date guide!\n\nFor current documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://docs.crawl4ai.com/).\n\nTo check our development plans and upcoming features, visit our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).\n\n\u003cdetails\u003e\n\u003csummary\u003e📈 \u003cstrong\u003eDevelopment TODOs\u003c/strong\u003e\u003c/summary\u003e\n\n- [x] 0. Graph Crawler: Smart website traversal using graph search algorithms for comprehensive nested page extraction\n- [ ] 1. Question-Based Crawler: Natural language driven web discovery and content extraction\n- [ ] 2. Knowledge-Optimal Crawler: Smart crawling that maximizes knowledge while minimizing data extraction\n- [ ] 3. Agentic Crawler: Autonomous system for complex multi-step crawling operations\n- [ ] 4. Automated Schema Generator: Convert natural language to extraction schemas\n- [ ] 5. Domain-Specific Scrapers: Pre-configured extractors for common platforms (academic, e-commerce)\n- [ ] 6. Web Embedding Index: Semantic search infrastructure for crawled content\n- [ ] 7. Interactive Playground: Web UI for testing, comparing strategies with AI assistance\n- [ ] 8. Performance Monitor: Real-time insights into crawler operations\n- [ ] 9. Cloud Integration: One-click deployment solutions across cloud providers\n- [ ] 10. Sponsorship Program: Structured support system with tiered benefits\n- [ ] 11. Educational Content: \"How to Crawl\" video series and interactive tutorials\n\n\u003c/details\u003e\n\n## 🤝 Contributing \n\nWe welcome contributions from the open-source community. Check out our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTORS.md) for more information.\n\n## 📄 License \u0026 Attribution\n\nThis project is licensed under the Apache License 2.0; attribution via the badges below is recommended. See the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE) file for details.\n\n### Attribution Requirements\nWhen using Crawl4AI, you must include one of the following attribution methods:\n\n#### 1. 
Badge Attribution (Recommended)\nAdd one of these badges to your README, documentation, or website:\n\n| Theme | Badge |\n|-------|-------|\n| **Disco Theme (Animated)** | \u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\u003cimg src=\"./docs/assets/powered-by-disco.svg\" alt=\"Powered by Crawl4AI\" width=\"200\"/\u003e\u003c/a\u003e |\n| **Night Theme (Dark with Neon)** | \u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\u003cimg src=\"./docs/assets/powered-by-night.svg\" alt=\"Powered by Crawl4AI\" width=\"200\"/\u003e\u003c/a\u003e |\n| **Dark Theme (Classic)** | \u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\u003cimg src=\"./docs/assets/powered-by-dark.svg\" alt=\"Powered by Crawl4AI\" width=\"200\"/\u003e\u003c/a\u003e |\n| **Light Theme (Classic)** | \u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\u003cimg src=\"./docs/assets/powered-by-light.svg\" alt=\"Powered by Crawl4AI\" width=\"200\"/\u003e\u003c/a\u003e |\n \n\nHTML code for adding the badges:\n```html\n\u003c!-- Disco Theme (Animated) --\u003e\n\u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-disco.svg\" alt=\"Powered by Crawl4AI\" width=\"200\"/\u003e\n\u003c/a\u003e\n\n\u003c!-- Night Theme (Dark with Neon) --\u003e\n\u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-night.svg\" alt=\"Powered by Crawl4AI\" width=\"200\"/\u003e\n\u003c/a\u003e\n\n\u003c!-- Dark Theme (Classic) --\u003e\n\u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-dark.svg\" alt=\"Powered by Crawl4AI\" width=\"200\"/\u003e\n\u003c/a\u003e\n\n\u003c!-- Light Theme (Classic) --\u003e\n\u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-light.svg\" alt=\"Powered by Crawl4AI\" width=\"200\"/\u003e\n\u003c/a\u003e\n\n\u003c!-- Simple Shield Badge --\u003e\n\u003ca href=\"https://github.com/unclecode/crawl4ai\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Powered%20by-Crawl4AI-blue?style=flat-square\" alt=\"Powered by Crawl4AI\"/\u003e\n\u003c/a\u003e\n```\n\n#### 2. Text Attribution\nAdd this line to your documentation:\n```\nThis project uses Crawl4AI (https://github.com/unclecode/crawl4ai) for web data extraction.\n```\n\n## 📚 Citation\n\nIf you use Crawl4AI in your research or project, please cite:\n\n```bibtex\n@software{crawl4ai2024,\n  author = {UncleCode},\n  title = {Crawl4AI: Open-source LLM Friendly Web Crawler \u0026 Scraper},\n  year = {2024},\n  publisher = {GitHub},\n  journal = {GitHub Repository},\n  howpublished = {\\url{https://github.com/unclecode/crawl4ai}},\n  commit = {Please use the commit hash you're working with}\n}\n```\n\nText citation format:\n```\nUncleCode. (2024). Crawl4AI: Open-source LLM Friendly Web Crawler \u0026 Scraper [Computer software]. \nGitHub. https://github.com/unclecode/crawl4ai\n```\n\n## 📧 Contact \n\nFor questions, suggestions, or feedback, feel free to reach out:\n\n- GitHub: [unclecode](https://github.com/unclecode)\n- Twitter: [@unclecode](https://twitter.com/unclecode)\n- Website: [crawl4ai.com](https://crawl4ai.com)\n\nHappy Crawling! 
🕸️🚀\n\n## 💖 Support Crawl4AI\n\n\u003e 🎉 **Sponsorship Program Just Launched!** Be among the first 50 **Founding Sponsors** and get permanent recognition in our Hall of Fame!\n\nCrawl4AI is the #1 trending open-source web crawler with 62K+ stars. Your support ensures we stay independent, innovative, and free forever.\n\n\u003cdiv align=\"center\"\u003e\n\n[![Become a Sponsor](https://img.shields.io/badge/Become%20a%20Sponsor-pink?style=for-the-badge\u0026logo=github-sponsors\u0026logoColor=white)](https://github.com/sponsors/unclecode)\n[![Current Sponsors](https://img.shields.io/github/sponsors/unclecode?style=for-the-badge\u0026logo=github\u0026label=Current%20Sponsors\u0026color=green)](https://github.com/sponsors/unclecode)\n\n\u003c/div\u003e\n\n### 🤝 Sponsorship Tiers\n\n- **🌱 Believer ($5/mo)**: Join the movement for data democratization\n- **🚀 Builder ($50/mo)**: Get priority support and early feature access  \n- **💼 Growing Team ($500/mo)**: Bi-weekly syncs and optimization help\n- **🏢 Data Infrastructure Partner ($2000/mo)**: Full partnership with dedicated support\n\n**Why sponsor?** Every tier includes real benefits. No more rate-limited APIs. Own your data pipeline. Build data sovereignty together.\n\n[View All Tiers \u0026 Benefits →](https://github.com/sponsors/unclecode)\n\n### 🏆 Our Sponsors\n\n#### 👑 Founding Sponsors (First 50)\n*Be part of history - [Become a Founding Sponsor](https://github.com/sponsors/unclecode)*\n\n\u003c!-- Founding sponsors will be permanently recognized here --\u003e\n\n#### Current Sponsors\nThank you to all our sponsors who make this project possible!\n\n\u003c!-- Sponsors will be automatically added here --\u003e\n\n## 🗾 Mission\n\nOur mission is to unlock the value of personal and enterprise data by transforming digital footprints into structured, tradeable assets. Crawl4AI empowers individuals and organizations with open-source tools to extract and structure data, fostering a shared data economy.  \n\nWe envision a future where AI is powered by real human knowledge, ensuring data creators directly benefit from their contributions. By democratizing data and enabling ethical sharing, we are laying the foundation for authentic AI advancement.\n\n\u003cdetails\u003e\n\u003csummary\u003e🔑 \u003cstrong\u003eKey Opportunities\u003c/strong\u003e\u003c/summary\u003e\n \n- **Data Capitalization**: Transform digital footprints into measurable, valuable assets.  \n- **Authentic AI Data**: Provide AI systems with real human insights.  \n- **Shared Economy**: Create a fair data marketplace that benefits data creators.  \n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e🚀 \u003cstrong\u003eDevelopment Pathway\u003c/strong\u003e\u003c/summary\u003e\n\n1. **Open-Source Tools**: Community-driven platforms for transparent data extraction.  \n2. **Digital Asset Structuring**: Tools to organize and value digital knowledge.  \n3. **Ethical Data Marketplace**: A secure, fair platform for exchanging structured data.  
\n\nFor more details, see our [full mission statement](./MISSION.md).\n\u003c/details\u003e\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=unclecode/crawl4ai\u0026type=Date)](https://star-history.com/#unclecode/crawl4ai\u0026Date)\n","funding_links":["https://github.com/sponsors/unclecode"],"categories":["Data Collection Tools","Python","AI Agent","🚀 Specialized Agents","AI Web Scraping \u0026 Automation","Web Scraping","others","🤖 AI-Powered Scraping","HTML","Recently Updated","网络服务","开源工具","HarmonyOS","Repos","Everything to Markdown to LLMs","🔥LLM Extraction / Parsing","🎯 项目简介","App","🕸️ Web Scraping \u0026 Crawling","🕷️ Web Scraping \u0026 Data Extraction for AI","Agent Infrastructure","Web Crawling","Tools","信息获取"],"sub_categories":["Web Scraping","Web crawler","🌐 Web Agents","[Sep 27, 2024](/content/2024/09/27/README.md)","网络爬虫","爬虫","Windows Manager","Contribute to our Repository","Tools","Notable MCP Servers","Data Extraction","Agent-Readable Content","爬虫与抓取框架"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Funclecode%2Fcrawl4ai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Funclecode%2Fcrawl4ai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Funclecode%2Fcrawl4ai/lists"}