{"id":35727246,"url":"https://github.com/rushi-balapure/pdf_2_json_extractor","last_synced_at":"2026-04-21T04:01:40.266Z","repository":{"id":307146957,"uuid":"1027464258","full_name":"Rushi-Balapure/pdf_2_json_extractor","owner":"Rushi-Balapure","description":"A high-performance Python library for extracting structured content from PDF documents with layout-aware text extraction. pdf_to_json preserves document structure including headings (H1-H6) and body text, outputting clean JSON format.","archived":false,"fork":false,"pushed_at":"2026-04-21T02:11:56.000Z","size":1768,"stargazers_count":3,"open_issues_count":0,"forks_count":1,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-21T03:54:32.406Z","etag":null,"topics":["cli-tool","cpu-only","cross-platform","data-extraction","document-parsing","document-processing","json","layout-analysis","nlp","offline","pdf","pdf-extraction","pdf-parser","pdf-processing","pdf-to-json","python","python-library","structure-extraction","text-extraction"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Rushi-Balapure.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-07-28T03:56:41.000Z","updated_at":"2026-04-21T02:11:56.000Z","dependencies_parsed_at":"2025-07-29T19:28:59.154Z","dependency_job_id":"566315da-13c7-478c-96ce-f94934cb0b67","html_url":"https://github.com/Rushi-Balapure/pdf_2_json_extractor","commit_stats":nu
ll,"previous_names":["rushi-balapure/pdf-reader","rushi-balapure/pdf_to_json","rushi-balapure/pdf_2_json_extractor"],"tags_count":5,"template":false,"template_full_name":null,"purl":"pkg:github/Rushi-Balapure/pdf_2_json_extractor","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rushi-Balapure%2Fpdf_2_json_extractor","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rushi-Balapure%2Fpdf_2_json_extractor/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rushi-Balapure%2Fpdf_2_json_extractor/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rushi-Balapure%2Fpdf_2_json_extractor/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Rushi-Balapure","download_url":"https://codeload.github.com/Rushi-Balapure/pdf_2_json_extractor/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rushi-Balapure%2Fpdf_2_json_extractor/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32076295,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-21T02:38:07.213Z","status":"ssl_error","status_checked_at":"2026-04-21T02:38:06.559Z","response_time":128,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cli-tool","cpu-only","cross-platform","data-extraction","document-parsing","document-processing","json","layout-analysis","nlp","offline","pdf","pdf-extraction","pdf-parser","pdf-processing","pdf-to-json","python","python-library","structure-extraction","text-extraction"],"created_at":"2026-01-06T09:15:11.494Z","updated_at":"2026-04-21T04:01:40.262Z","avatar_url":"https://github.com/Rushi-Balapure.png","language":"Python","readme":"# pdf_2_json_extractor\n\n[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)\n[![Python Version](https://img.shields.io/badge/python-3.8%2B-blue.svg)](https://python.org)\n[![PyPI Version](https://img.shields.io/pypi/v/pdf_2_json_extractor.svg)](https://pypi.org/project/pdf_2_json_extractor/)\n[![Coverage Status](https://coveralls.io/repos/github/Rushi-Balapure/pdf_2_json_extractor/badge.svg?branch=main)](https://coveralls.io/github/Rushi-Balapure/pdf_2_json_extractor?branch=main)\n\nA high-performance Python library for extracting structured content from PDF documents with layout-aware text extraction. 
pdf_2_json_extractor preserves document structure including headings (H1-H6) and body text, outputting clean JSON format.\n\n## Features\n\n- **Layout-aware extraction**: Detects document structure including headings of different levels using font size and style analysis\n- **Multilingual support**: Handles Latin, Cyrillic, Asian scripts (Chinese, Japanese, Korean), Arabic, Hebrew, and other complex Unicode scripts\n- **High performance**: Processes 50-page PDFs in ≤10 seconds on modern CPUs\n- **Small footprint**: Minimal dependencies, no heavy ML models used\n- **Offline operation**: No internet connectivity required to run\n- **Cross-platform**: AMD64 compatible, runs purely on CPU\n- **Easy to use**: Simple API with both programmatic and CLI interfaces\n\n## Installation\n\n```bash\npip install pdf_2_json_extractor\n```\n\n## Quick Start\n\n### Python API\n\n```python\nimport pdf_2_json_extractor\n\n# Extract PDF to dictionary\nresult = pdf_2_json_extractor.extract_pdf_to_dict(\"document.pdf\")\nprint(f\"Title: {result['title']}\")\nprint(f\"Number of sections: {result['stats']['num_sections']}\")\n\n# Extract PDF to JSON string\njson_output = pdf_2_json_extractor.extract_pdf_to_json(\"document.pdf\")\nprint(json_output)\n\n# Save to file\npdf_2_json_extractor.extract_pdf_to_json(\"document.pdf\", \"output.json\")\n```\n\n### Command Line Interface\n\n```bash\n# Extract to stdout\npdf_2_json_extractor document.pdf\n\n# Save to file\npdf_2_json_extractor document.pdf -o output.json\n\n# Compact output\npdf_2_json_extractor document.pdf --compact\n\n# Pretty print (default)\npdf_2_json_extractor document.pdf --pretty\n```\n\n## JSON Output Format\n\n```json\n{\n  \"title\": \"Document Title\",\n  \"sections\": [\n    {\n      \"level\": \"H1\",\n      \"title\": \"Chapter 1: Introduction\",\n      \"paragraphs\": [\"This is the introduction text...\"]\n    },\n    {\n      \"level\": \"H2\", \n      \"title\": \"1.1 Overview\",\n      \"paragraphs\": [\"Overview 
content...\"]\n    },\n    {\n      \"level\": \"content\",\n      \"title\": null,\n      \"paragraphs\": [\"Body text content...\"]\n    }\n  ],\n  \"font_histogram\": {\n    \"12.0\": 1500,\n    \"14.0\": 200,\n    \"16.0\": 50\n  },\n  \"heading_levels\": {\n    \"16.0\": \"H1\",\n    \"14.0\": \"H2\"\n  },\n  \"stats\": {\n    \"page_count\": 25,\n    \"processing_time\": 2.34,\n    \"num_sections\": 15,\n    \"num_headings\": 8,\n    \"num_paragraphs\": 45\n  }\n}\n```\n\n## Advanced Usage\n\n### Custom Configuration\n\n```python\nfrom pdf_2_json_extractor import PDFStructureExtractor, Config\n\n# Create custom configuration\nconfig = Config()\nconfig.MAX_PAGES_FOR_FONT_ANALYSIS = 5\nconfig.MIN_HEADING_FREQUENCY = 0.002\n\n# Use with custom config\nextractor = PDFStructureExtractor(config)\nresult = extractor.extract_text_with_structure(\"document.pdf\")\n```\n\n### Error Handling\n\n```python\nfrom pdf_2_json_extractor import extract_pdf_to_dict\nfrom pdf_2_json_extractor.exceptions import PdfToJsonError, InvalidPDFError, PDFFileNotFoundError\n\ntry:\n    result = extract_pdf_to_dict(\"document.pdf\")\nexcept PDFFileNotFoundError:\n    print(\"PDF file not found\")\nexcept InvalidPDFError:\n    print(\"Invalid or corrupted PDF file\")\nexcept PdfToJsonError as e:\n    print(f\"Processing error: {e}\")\n```\n\n## Configuration Options\n\nYou can configure pdf_2_json_extractor using environment variables:\n\n```bash\n# Font analysis settings\nexport PDF_TO_JSON_MAX_PAGES_FOR_FONT_ANALYSIS=10\nexport PDF_TO_JSON_FONT_SIZE_PRECISION=0.1\nexport PDF_TO_JSON_MIN_HEADING_FREQUENCY=0.001\n\n# Text processing settings\nexport PDF_TO_JSON_MIN_TEXT_LENGTH=3\nexport PDF_TO_JSON_MAX_HEADING_LEVELS=6\nexport PDF_TO_JSON_COMBINE_CONSECUTIVE_TEXT=True\n\n# Language support\nexport PDF_TO_JSON_MULTILINGUAL_SUPPORT=True\nexport PDF_TO_JSON_DEFAULT_ENCODING=utf-8\n\n# Performance settings\nexport PDF_TO_JSON_PROCESS_PAGES_IN_CHUNKS=False\nexport PDF_TO_JSON_CHUNK_SIZE=10\n\n# 
Debug settings\nexport PDF_TO_JSON_DEBUG_MODE=False\nexport PDF_TO_JSON_LOG_LEVEL=INFO\n```\n\n## Development\n\n### Installation from Source\n\n```bash\ngit clone https://github.com/Rushi-Balapure/pdf_2_json_extractor.git\ncd pdf_2_json_extractor\npip install -e .\n```\n\n### Building the Library\n\n```bash\n# Build the package\n./build.sh\n\n# Or manually\npython -m build\n```\n\n### Running Tests\n\n```bash\npip install -e \".[dev]\"\npytest\n```\n\n### Docker Development\n\n```bash\n# Build Docker image\ndocker build -t pdf_2_json_extractor:latest .\n\n# Run with Docker\ndocker run --rm -v $(pwd)/test:/test pdf_2_json_extractor:latest /test/document.pdf\n```\n\n## Performance\n\npdf_2_json_extractor is optimized for high performance:\n\n- **CPU-only processing**: No GPU requirements\n- **Memory efficient**: Processes large documents without excessive memory usage\n- **Fast extraction**: Typical processing times:\n  - 10-page document: ~1-2 seconds\n  - 50-page document: ~5-10 seconds\n  - 100-page document: ~15-25 seconds\n\n## Supported Languages\n\npdf_2_json_extractor supports text extraction from PDFs containing:\n\n- Latin scripts (English, Spanish, French, German, etc.)\n- Cyrillic scripts (Russian, Bulgarian, Serbian, etc.)\n- Asian scripts (Chinese, Japanese, Korean)\n- Arabic and Hebrew scripts\n- Other Unicode scripts\n\n## License\n\nThis project is licensed under the **Apache License 2.0** - see the [LICENSE](LICENSE) file for details.\n\n## Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request. 
For major changes, please open an issue first to discuss what you would like to change.\n\n## References\n\nThis library is inspired by the research paper:\n\n**\"Layout-Aware Text Extraction from Full-text PDF of Scientific Articles\"**  \n_Cartic Ramakrishnan, Abhishek Patnia, Eduard Hovy, Gully APC Burns_  \nPublished in Source Code for Biology and Medicine (2012)  \n[Full Paper](http://www.scfbm.org/content/7/1/7)\n\n## Support\n\nFor questions, issues, or contributions:\n\n- 📧 Email: rishibalapure12@gmail.com\n- 🐛 Issues: [GitHub Issues](https://github.com/Rushi-Balapure/pdf_2_json_extractor/issues)\n- 📖 Documentation: [GitHub Wiki](https://github.com/Rushi-Balapure/pdf_2_json_extractor/wiki)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frushi-balapure%2Fpdf_2_json_extractor","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frushi-balapure%2Fpdf_2_json_extractor","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frushi-balapure%2Fpdf_2_json_extractor/lists"}