{"id":29359363,"url":"https://github.com/chatdoc-com/OCRFlux","last_synced_at":"2025-07-09T07:03:19.622Z","repository":{"id":300181786,"uuid":"995812709","full_name":"chatdoc-com/OCRFlux","owner":"chatdoc-com","description":"OCRFlux is a lightweight yet powerful multimodal toolkit that significantly advances PDF-to-Markdown conversion, excelling in complex layout handling, complicated table parsing and cross-page content merging.","archived":false,"fork":false,"pushed_at":"2025-06-30T02:14:10.000Z","size":217,"stargazers_count":507,"open_issues_count":2,"forks_count":11,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-06-30T03:25:47.508Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/chatdoc-com.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-06-04T03:36:43.000Z","updated_at":"2025-06-30T03:03:05.000Z","dependencies_parsed_at":"2025-06-20T09:36:40.155Z","dependency_job_id":"89cc4789-9fbf-403a-b786-50119e7a882b","html_url":"https://github.com/chatdoc-com/OCRFlux","commit_stats":null,"previous_names":["chatdoc-com/ocrflux"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/chatdoc-com/OCRFlux","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatdoc-com%2FOCRFlux","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatdoc-com%2FOCRFlux/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatdoc-com%2FOC
RFlux/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatdoc-com%2FOCRFlux/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/chatdoc-com","download_url":"https://codeload.github.com/chatdoc-com/OCRFlux/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatdoc-com%2FOCRFlux/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":264411124,"owners_count":23603799,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-07-09T07:01:52.903Z","updated_at":"2025-07-09T07:03:19.613Z","avatar_url":"https://github.com/chatdoc-com.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"./images/OCRFlux.png\" alt=\"OCRFlux Logo\" width=\"300\"/\u003e\n\u003chr/\u003e\n\u003c/div\u003e\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/chatdoc-com/OCRFlux/blob/main/LICENSE\"\u003e\n    \u003cimg alt=\"GitHub License\" src=\"./images/license.svg\" height=\"20\"\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://github.com/chatdoc-com/OCRFlux/releases\"\u003e\n    \u003cimg alt=\"GitHub release\" src=\"./images/release.svg\" height=\"20\"\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://ocrflux.pdfparser.io/\"\u003e\n    \u003cimg alt=\"Demo\" src=\"./images/demo.svg\" height=\"20\"\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://discord.gg/F33mhsAqqg\"\u003e\n    \u003cimg alt=\"Discord\" src=\"./images/discord.svg\" height=\"20\"\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\nOCRFlux 
is a toolkit based on a multimodal large language model for converting PDFs and images into clean, readable, plain Markdown text. It aims to push the current state-of-the-art to a significantly higher level.\n\nTry the online demo: [OCRFlux Demo](https://ocrflux.pdfparser.io/)\n\nFunctions: **Whole file parsing**\n- On each page\n    - Converts content into text with a natural reading order, even in the presence of multi-column layouts, figures, and insets\n    - Supports complicated tables and equations\n    - Automatically removes headers and footers\n\n- Cross-page table/paragraph merging\n    - Cross-page table merging\n    - Cross-page paragraph merging\n\n\nKey features:\n- Superior parsing quality on each page\n\n    On our released benchmark [OCRFlux-bench-single](https://huggingface.co/datasets/ChatDOC/OCRFlux-bench-single), it achieves an Edit Distance Similarity (EDS) of 0.967, which is 0.095 higher than the baseline model [olmOCR-7B-0225-preview](https://huggingface.co/allenai/olmOCR-7B-0225-preview) (0.872), 0.109 higher than [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) (0.858) and 0.187 higher than [MonkeyOCR](https://huggingface.co/echo840/MonkeyOCR) (0.780).\n\n- Native support for cross-page table/paragraph merging (to the best of our knowledge, this is the first open-source project to support this feature).\n\n- Based on a 3B-parameter VLM, so it can run even on an RTX 3090 GPU.\n\nRelease:\n- [OCRFlux-3B](https://huggingface.co/ChatDOC/OCRFlux-3B) - 3B-parameter VLM\n- Benchmarks for evaluation\n    - [OCRFlux-bench-single](https://huggingface.co/datasets/ChatDOC/OCRFlux-bench-single)\n    - [OCRFlux-pubtabnet-single](https://huggingface.co/datasets/ChatDOC/OCRFlux-pubtabnet-single)\n    - [OCRFlux-bench-cross](https://huggingface.co/datasets/ChatDOC/OCRFlux-bench-cross)\n    - [OCRFlux-pubtabnet-cross](https://huggingface.co/datasets/ChatDOC/OCRFlux-pubtabnet-cross)\n\n\n### News\n - Jun 17, 2025 - v0.1.0 - Initial 
public launch and demo.\n\n### Benchmark for single-page parsing\n\nWe ship two comprehensive benchmarks to help measure the performance of our OCR system in single-page parsing:\n\n  - [OCRFlux-bench-single](https://huggingface.co/datasets/ChatDOC/OCRFlux-bench-single): Contains 2000 PDF pages (1000 English pages and 1000 Chinese pages) and their ground-truth Markdowns (manually labeled with multiple rounds of checking).\n\n  - [OCRFlux-pubtabnet-single](https://huggingface.co/datasets/ChatDOC/OCRFlux-pubtabnet-single): Derived from the public [PubTabNet](https://github.com/ibm-aur-nlp/PubTabNet) benchmark with some format transformation. It contains 9064 HTML table samples, which are split into simple tables and complex tables according to whether they have rowspan or colspan cells.\n\nWe emphasize that the released benchmarks are NOT included in our training and evaluation data. The main results are as follows:\n\n\n1. In [OCRFlux-bench-single](https://huggingface.co/datasets/ChatDOC/OCRFlux-bench-single), we calculated the Edit Distance Similarity (EDS) between the generated Markdowns and the ground-truth Markdowns as the metric.\n\n    \u003ctable\u003e\n      \u003cthead\u003e\n        \u003ctr\u003e\n          \u003cth\u003eLanguage\u003c/th\u003e\n          \u003cth\u003eModel\u003c/th\u003e\n          \u003cth\u003eAvg EDS ↑\u003c/th\u003e\n        \u003c/tr\u003e\n      \u003c/thead\u003e\n      \u003ctbody\u003e\n        \u003ctr\u003e\n          \u003ctd rowspan=\"4\"\u003eEnglish\u003c/td\u003e\n          \u003ctd\u003eolmOCR-7B-0225-preview\u003c/td\u003e\n          \u003ctd\u003e0.885\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eNanonets-OCR-s\u003c/td\u003e\n          \u003ctd\u003e0.870\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eMonkeyOCR\u003c/td\u003e\n          \u003ctd\u003e0.828\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          
\u003ctd\u003e\u003cstrong\u003e\u003ca href=\"https://huggingface.co/ChatDOC/OCRFlux-3B\"\u003eOCRFlux-3B\u003c/a\u003e\u003c/strong\u003e\u003c/td\u003e\n          \u003ctd\u003e0.971\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd rowspan=\"4\"\u003eChinese\u003c/td\u003e\n          \u003ctd\u003eolmOCR-7B-0225-preview\u003c/td\u003e\n          \u003ctd\u003e0.859\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eNanonets-OCR-s\u003c/td\u003e\n          \u003ctd\u003e0.846\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eMonkeyOCR\u003c/td\u003e\n          \u003ctd\u003e0.731\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003e\u003cstrong\u003e\u003ca href=\"https://huggingface.co/ChatDOC/OCRFlux-3B\"\u003eOCRFlux-3B\u003c/a\u003e\u003c/strong\u003e\u003c/td\u003e\n          \u003ctd\u003e0.962\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd rowspan=\"4\"\u003eTotal\u003c/td\u003e\n          \u003ctd\u003eolmOCR-7B-0225-preview\u003c/td\u003e\n          \u003ctd\u003e0.872\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eNanonets-OCR-s\u003c/td\u003e\n          \u003ctd\u003e0.858\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eMonkeyOCR\u003c/td\u003e\n          \u003ctd\u003e0.780\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003e\u003cstrong\u003e\u003ca href=\"https://huggingface.co/ChatDOC/OCRFlux-3B\"\u003eOCRFlux-3B\u003c/a\u003e\u003c/strong\u003e\u003c/td\u003e\n          \u003ctd\u003e0.967\u003c/td\u003e\n        \u003c/tr\u003e\n      \u003c/tbody\u003e\n    \u003c/table\u003e\n\n2. 
In [OCRFlux-pubtabnet-single](https://huggingface.co/datasets/ChatDOC/OCRFlux-pubtabnet-single), we calculated the Tree Edit Distance-based Similarity (TEDS) between the generated HTML tables and the ground-truth HTML tables as the metric.\n    \u003ctable\u003e\n      \u003cthead\u003e\n        \u003ctr\u003e\n          \u003cth\u003eType\u003c/th\u003e\n          \u003cth\u003eModel\u003c/th\u003e\n          \u003cth\u003eAvg TEDS ↑\u003c/th\u003e\n        \u003c/tr\u003e\n      \u003c/thead\u003e\n      \u003ctbody\u003e\n        \u003ctr\u003e\n          \u003ctd rowspan=\"4\"\u003eSimple\u003c/td\u003e\n          \u003ctd\u003eolmOCR-7B-0225-preview\u003c/td\u003e\n          \u003ctd\u003e0.810\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eNanonets-OCR-s\u003c/td\u003e\n          \u003ctd\u003e0.882\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eMonkeyOCR\u003c/td\u003e\n          \u003ctd\u003e0.880\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003e\u003cstrong\u003e\u003ca href=\"https://huggingface.co/ChatDOC/OCRFlux-3B\"\u003eOCRFlux-3B\u003c/a\u003e\u003c/strong\u003e\u003c/td\u003e\n          \u003ctd\u003e0.912\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd rowspan=\"4\"\u003eComplex\u003c/td\u003e\n          \u003ctd\u003eolmOCR-7B-0225-preview\u003c/td\u003e\n          \u003ctd\u003e0.676\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eNanonets-OCR-s\u003c/td\u003e\n          \u003ctd\u003e0.772\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003e\u003cstrong\u003eMonkeyOCR\u003c/strong\u003e\u003c/td\u003e\n          \u003ctd\u003e0.826\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003e\u003ca href=\"https://huggingface.co/ChatDOC/OCRFlux-3B\"\u003eOCRFlux-3B\u003c/a\u003e\u003c/td\u003e\n          \u003ctd\u003e0.807\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd rowspan=\"4\"\u003eTotal\u003c/td\u003e\n          \u003ctd\u003eolmOCR-7B-0225-preview\u003c/td\u003e\n          \u003ctd\u003e0.744\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eNanonets-OCR-s\u003c/td\u003e\n          \u003ctd\u003e0.828\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003eMonkeyOCR\u003c/td\u003e\n          \u003ctd\u003e0.853\u003c/td\u003e\n        \u003c/tr\u003e\n        \u003ctr\u003e\n          \u003ctd\u003e\u003cstrong\u003e\u003ca href=\"https://huggingface.co/ChatDOC/OCRFlux-3B\"\u003eOCRFlux-3B\u003c/a\u003e\u003c/strong\u003e\u003c/td\u003e\n          \u003ctd\u003e0.861\u003c/td\u003e\n        \u003c/tr\u003e\n      \u003c/tbody\u003e\n    \u003c/table\u003e\n\nWe also present some case studies demonstrating the superiority of our model in the [blog](https://ocrflux.pdfparser.io/#/blog) article.\n\n### Benchmark for cross-page table/paragraph merging\n\nPDF documents are typically paginated, which often results in tables or paragraphs being split across consecutive pages. Accurately detecting and merging such cross-page structures is crucial to avoid generating incomplete or fragmented content.\n\nThe detection task can be formulated as follows: given the Markdowns of two consecutive pages—each structured as a list of Markdown elements (e.g., paragraphs and tables)—the goal is to identify the indexes of elements that should be merged across the pages.\n\nThen, for the merging task, if the elements to be merged are paragraphs, we can simply concatenate them. However, merging two table fragments is much more challenging. For example, a table spanning multiple pages will often repeat the header from the first page on the second page. 
Another difficult scenario is a table cell that contains long content spanning multiple lines within the cell, with the first few lines appearing on the previous page and the remaining lines continuing on the next page. We also observe cases where tables with a large number of columns are split vertically and placed on two consecutive pages. More examples of cross-page tables can be found in our [blog](https://ocrflux.pdfparser.io/#/blog) article. To address these issues, we developed an LLM-based model for cross-page table merging. Specifically, this model takes two split table fragments as input and generates a complete, well-structured table as output.\n\nWe ship two comprehensive benchmarks to help measure the performance of our OCR system on the cross-page table/paragraph detection and merging tasks respectively:\n\n  - [OCRFlux-bench-cross](https://huggingface.co/datasets/ChatDOC/OCRFlux-bench-cross): Contains 1000 samples (500 English samples and 500 Chinese samples); each sample contains the Markdown element lists of two consecutive pages, along with the indexes of elements that need to be merged (manually labeled through multiple rounds of review). If no tables or paragraphs require merging, the indexes in the annotation data are left empty.\n\n  - [OCRFlux-pubtabnet-cross](https://huggingface.co/datasets/ChatDOC/OCRFlux-pubtabnet-cross): Contains 9064 pairs of split table fragments, along with their corresponding ground-truth merged versions.\n\nThese released benchmarks are likewise NOT included in our training and evaluation data. The main results are as follows:\n\n1. In [OCRFlux-bench-cross](https://huggingface.co/datasets/ChatDOC/OCRFlux-bench-cross), we calculated Accuracy, Precision, Recall and F1 score as the metrics. 
Note that a detection result is counted as correct only when the model accurately judges whether any elements need to be merged across the two pages and outputs the correct indexes for them.\n\n    | Language | Precision ↑ | Recall ↑ | F1 ↑  | Accuracy ↑ |\n    |----------|-------------|----------|-------|------------|\n    | English  | 0.992       | 0.964    | 0.978 | 0.978      |\n    | Chinese  | 1.000       | 0.988    | 0.994 | 0.994      |\n    | Total    | 0.996       | 0.976    | 0.986 | 0.986      |\n\n2. In [OCRFlux-pubtabnet-cross](https://huggingface.co/datasets/ChatDOC/OCRFlux-pubtabnet-cross), we calculated the Tree Edit Distance-based Similarity (TEDS) between the generated merged table and the ground-truth merged table as the metric.\n\n    | Table type | Avg TEDS ↑   |\n    |------------|--------------|\n    | Simple     | 0.965        |\n    | Complex    | 0.935        |\n    | Total      | 0.950        |\n\n### Installation\n\nRequirements:\n - Recent NVIDIA GPU (tested on RTX 3090, 4090, L40S, A100, H100) with at least 12 GB of GPU RAM\n - 20 GB of free disk space\n\nYou will need to install poppler-utils and additional fonts for rendering PDF images.\n\nInstall dependencies (Ubuntu/Debian):\n```bash\nsudo apt-get update\nsudo apt-get install poppler-utils poppler-data ttf-mscorefonts-installer msttcorefonts fonts-crosextra-caladea fonts-crosextra-carlito gsfonts lcdf-typetools\n```\n\nSet up a conda environment and install OCRFlux. The requirements for running OCRFlux are difficult to install in an existing Python environment, so please create a clean Python environment to install into.\n```bash\nconda create -n ocrflux python=3.11\nconda activate ocrflux\n\ngit clone https://github.com/chatdoc-com/OCRFlux.git\ncd OCRFlux\n\npip install -e . --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer/\n```\n\n### Local Usage Example\n\nFor quick testing, try the [web demo](https://5f65ccdc2d4fd2f364.gradio.live). 
To run locally, a GPU is required, as inference is powered by [vllm](https://github.com/vllm-project/vllm) under the hood.\n\n- For a PDF document:\n    ```bash\n    python -m ocrflux.pipeline ./localworkspace --data test.pdf --model /model_dir/OCRFlux-3B\n    ```\n\n- For an image:\n    ```bash\n    python -m ocrflux.pipeline ./localworkspace --data test_page.png --model /model_dir/OCRFlux-3B\n    ```\n\n- For a directory of PDFs or images:\n    ```bash\n    python -m ocrflux.pipeline ./localworkspace --data test_pdf_dir/* --model /model_dir/OCRFlux-3B\n    ```\nYou can set `--skip_cross_page_merge` to skip cross-page merging and speed up parsing; the pipeline will then simply concatenate the parsing results of each page to generate the final Markdown of the document.\n\nResults will be stored as JSONL files in the `./localworkspace/results` directory.\n\nEach line in the JSONL files is a JSON object with the following fields:\n\n```\n{\n    \"orig_path\": str,  # the path to the raw pdf or image file\n    \"num_pages\": int,  # the number of pages in the pdf file\n    \"document_text\": str, # the Markdown text of the converted pdf or image file\n    \"page_texts\": dict, # the Markdown texts of each page in the pdf file; the key is the page index and the value is the Markdown text of the page\n    \"fallback_pages\": [int], # the indexes of pages that were not converted successfully\n}\n```\n\n### API for directly calling OCRFlux (New)\nYou can use the inference API to call OCRFlux directly in your code without launching an online vllm server, as follows:\n\n```python\nfrom vllm import LLM\nfrom ocrflux.inference import parse\n\nfile_path = 'test.pdf'\n# file_path = 'test.png'\nllm = LLM(model=\"model_dir/OCRFlux-3B\", gpu_memory_utilization=0.8, max_model_len=8192)\nresult = parse(llm, file_path)\nif result is not None:\n    document_markdown = result['document_text']\n    print(document_markdown)\n    with open('test.md', 'w') as f:\n        f.write(document_markdown)\nelse:\n    
print(\"Parse failed.\")\n```\nIf parsing fails or there are fallback pages in the result, you can set the `max_page_retries` argument of the `parse` function to a positive integer to get a better result, though this may increase inference time.\n\n### Docker Usage\n\nRequirements:\n\n- Docker with GPU support [(NVIDIA Toolkit)](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)\n- Pre-downloaded model: [OCRFlux-3B](https://huggingface.co/ChatDOC/OCRFlux-3B)\n\nTo use OCRFlux in a Docker container, you can use the following example command:\n\n```bash\ndocker run -it --gpus all \\\n  -v /path/to/localworkspace:/localworkspace \\\n  -v /path/to/test_pdf_dir:/test_pdf_dir/ \\\n  -v /path/to/OCRFlux-3B:/OCRFlux-3B \\\n  chatdoc/ocrflux:latest /localworkspace --data /test_pdf_dir/* --model /OCRFlux-3B/\n```\n\n#### Viewing Results\nGenerate the final Markdown files by running the following command. The generated Markdown files will be in the `./localworkspace/markdowns/DOCUMENT_NAME` directory.\n\n```bash\npython -m ocrflux.jsonl_to_markdown ./localworkspace\n```\n\n### Full documentation for the pipeline\n\n```bash\npython -m ocrflux.pipeline --help\nusage: pipeline.py [-h] [--task {pdf2markdown,merge_pages,merge_tables}] [--data [DATA ...]] [--pages_per_group PAGES_PER_GROUP] [--max_page_retries MAX_PAGE_RETRIES]\n                   [--max_page_error_rate MAX_PAGE_ERROR_RATE] [--workers WORKERS] [--model MODEL] [--model_max_context MODEL_MAX_CONTEXT] [--model_chat_template MODEL_CHAT_TEMPLATE]\n                   [--target_longest_image_dim TARGET_LONGEST_IMAGE_DIM] [--skip_cross_page_merge] [--port PORT]\n                   workspace\n\nManager for running millions of PDFs through a batch inference pipeline\n\npositional arguments:\n  workspace             The filesystem path where work will be stored, can be a local folder\n\noptions:\n  -h, --help            show this help message and exit\n  --data [DATA ...]     
List of paths to files to process\n  --pages_per_group PAGES_PER_GROUP\n                        Aiming for this many pdf pages per work item group\n  --max_page_retries MAX_PAGE_RETRIES\n                        Max number of times we will retry rendering a page\n  --max_page_error_rate MAX_PAGE_ERROR_RATE\n                        Rate of allowable failed pages in a document, 1/250 by default\n  --workers WORKERS     Number of workers to run at a time\n  --model MODEL         The path to the model\n  --model_max_context MODEL_MAX_CONTEXT\n                        Maximum context length that the model was fine tuned under\n  --model_chat_template MODEL_CHAT_TEMPLATE\n                        Chat template to pass to vllm server\n  --target_longest_image_dim TARGET_LONGEST_IMAGE_DIM\n                        Dimension on longest side to use for rendering the pdf pages\n  --skip_cross_page_merge\n                        Whether to skip cross-page merging\n  --port PORT           Port to use for the VLLM server\n```\n\n## Code overview\n\nThere are some nice reusable pieces of the code that may be useful for your own projects:\n - Processing millions of PDFs through our released model using VLLM - [pipeline.py](https://github.com/chatdoc-com/OCRFlux/blob/main/ocrflux/pipeline.py)\n - Generating final Markdowns from jsonl files - [jsonl_to_markdown.py](https://github.com/chatdoc-com/OCRFlux/blob/main/ocrflux/jsonl_to_markdown.py)\n - Evaluating the model on the single-page parsing task - [eval_page_to_markdown.py](https://github.com/chatdoc-com/OCRFlux/blob/main/eval/eval_page_to_markdown.py)\n - Evaluating the model on the table parsing task - [eval_table_to_html.py](https://github.com/chatdoc-com/OCRFlux/blob/main/eval/eval_table_to_html.py)\n - Evaluating the model on the paragraphs/tables merging detection task - [eval_element_merge_detect.py](https://github.com/chatdoc-com/OCRFlux/blob/main/eval/eval_element_merge_detect.py)\n - Evaluating the model on the table 
merging task - [eval_html_table_merge.py](https://github.com/chatdoc-com/OCRFlux/blob/main/eval/eval_html_table_merge.py)\n\n\n## Team\n\n\u003c!-- start team --\u003e\n\n**OCRFlux** is developed and maintained by the ChatDOC team, backed by [ChatDOC](https://chatdoc.com/).\n\n\u003c!-- end team --\u003e\n\n## License\n\n\u003c!-- start license --\u003e\n\n**OCRFlux** is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).\nA full copy of the license can be found [on GitHub](https://github.com/chatdoc-com/OCRFlux/blob/main/LICENSE).\n\n\u003c!-- end license --\u003e\n","funding_links":[],"categories":["Python","光学字符识别OCR","数据 Data","📄 OCR Model Zoo"],"sub_categories":["资源传输下载"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchatdoc-com%2FOCRFlux","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fchatdoc-com%2FOCRFlux","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchatdoc-com%2FOCRFlux/lists"}