{"id":25843011,"url":"https://github.com/anyparser/anyparser_crewai","last_synced_at":"2026-02-15T05:32:15.571Z","repository":{"id":277987332,"uuid":"934158777","full_name":"anyparser/anyparser_crewai","owner":"anyparser","description":"Supercharge your AI workflows by combining Anyparser’s advanced content extraction with Crew AI. With this integration, you can effortlessly leverage Anyparser’s document processing and data extraction tools within your Crew AI applications.","archived":false,"fork":false,"pushed_at":"2025-02-17T11:22:53.000Z","size":439,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2025-10-04T20:57:52.796Z","etag":null,"topics":["anyparser","artificial-intelligence","cache-augmented-generation","cag","crew-ai","crew-ai-rag","crewai","crewai-rag","document-parser","document-parsing","kag","knowledge-graph","python","rag","retrieval-augmented-generation","typescript"],"latest_commit_sha":null,"homepage":"https://anyparser.com","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/anyparser.png","metadata":{"files":{"readme":"README.md","changelog":"changelogs/v0.0.1-changelog.md","contributing":null,"funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2025-02-17T11:21:35.000Z","updated_at":"2025-02-17T15:37:36.000Z","dependencies_parsed_at":null,"dependency_job_id":"ca979431-9e60-448f-af10-09b2aa24684e","html_url":"https://github.com/anyparser/anyparser_crewai","commit_stats":null,"previous_names":["anyparser/anyparser_crewai"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github
/anyparser/anyparser_crewai","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/anyparser%2Fanyparser_crewai","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/anyparser%2Fanyparser_crewai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/anyparser%2Fanyparser_crewai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/anyparser%2Fanyparser_crewai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/anyparser","download_url":"https://codeload.github.com/anyparser/anyparser_crewai/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/anyparser%2Fanyparser_crewai/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29470613,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-15T05:26:30.465Z","status":"ssl_error","status_checked_at":"2026-02-15T05:26:21.858Z","response_time":118,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["anyparser","artificial-intelligence","cache-augmented-generation","cag","crew-ai","crew-ai-rag","crewai","crewai-rag","document-parser","document-parsing","kag","knowledge-graph","python","rag","retrieval-augmented-generation","typescript"],"created_at":"2025-03-01T06:35:44.217Z","updated_at":"2026-02-15T05:32:15.565Z","avatar_url":"https://github.com/anyparser.png","language":"Python","readme":"# Anyparser CrewAI\n\nhttps://anyparser.com\n\n**Integrate Anyparser's powerful content extraction capabilities with CrewAI for enhanced AI workflows.** This integration package enables seamless use of Anyparser's document processing and data extraction features within your CrewAI applications, making it easier than ever to build sophisticated AI pipelines.\n\n## Installation\n\n```bash\npip install anyparser-crewai\n```\n\n## Setup\n\nBefore running the examples, set your Anyparser API credentials as environment variables:\n\n```bash\nexport ANYPARSER_API_KEY=\"your-api-key\"\nexport ANYPARSER_API_URL=\"https://anyparserapi.com\"\n```\n\n## Anyparser CrewAI Examples\n\nThe `examples` directory contains scripts demonstrating different ways to use the Anyparser CrewAI integration:\n\n```bash\npython examples/01_single_file_markdown.py\npython examples/02_single_file_json.py\npython examples/03_multiple_files_markdown.py\npython examples/04_multiple_files_json.py\npython examples/05_directory_read.py\npython examples/06_ocr_markdown.py\npython examples/07_ocr_json.py\npython examples/08_web_crawler.py\npython examples/09_web_crawler_json.py\n```\n\n## Examples\n\n### 1. Single File Processing\n- `01_single_file_markdown.py`: Process a single file with markdown output\n- `02_single_file_json.py`: Process a single file with JSON output\n\n### 2. Multiple File Processing\n- `03_multiple_files_markdown.py`: Process multiple files with markdown output\n- `04_multiple_files_json.py`: Process multiple files with JSON output\n- `05_directory_read.py`: Load and process all files from a folder (up to 5 files)\n\n### 3. OCR Processing\n- `06_ocr_markdown.py`: Process images and scans with OCR (markdown output)\n- `07_ocr_json.py`: Process images and scans with OCR (JSON output)\n\n### 4. Web Crawling\n- `08_web_crawler.py`: Basic web crawling with essential settings (markdown output)\n- `09_web_crawler_json.py`: Web crawling with JSON output\n\n## Features Demonstrated\n\n### Document Processing\n- Different output formats (markdown, JSON)\n- Multiple file handling\n- Folder processing\n- Image and table extraction\n- Metadata handling\n\n### OCR\n- Multi-language support\n- Performance presets\n\n### Web Crawling\n- Basic crawling with depth and scope control\n- Advanced URL and content filtering\n- Crawling strategies (BFS, LIFO)\n- Rate limiting and robots.txt respect\n\n## Notes\n\n- All examples use async/await for better performance\n- Error handling is included in all examples\n- Each example includes detailed comments explaining the options used\n\n## License\n\nApache-2.0\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fanyparser%2Fanyparser_crewai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fanyparser%2Fanyparser_crewai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fanyparser%2Fanyparser_crewai/lists"}