{"id":17526882,"url":"https://github.com/allenai/papermage","last_synced_at":"2025-10-13T15:57:02.233Z","repository":{"id":185711660,"uuid":"588822189","full_name":"allenai/papermage","owner":"allenai","description":"library supporting NLP and CV research on scientific papers","archived":false,"fork":false,"pushed_at":"2024-04-05T22:29:33.000Z","size":51184,"stargazers_count":668,"open_issues_count":27,"forks_count":52,"subscribers_count":9,"default_branch":"main","last_synced_at":"2024-09-16T02:37:50.569Z","etag":null,"topics":["computer-vision","machine-learning","multimodal","natural-language-processing","pdf-processing","python","scientific-papers"],"latest_commit_sha":null,"homepage":"https://papermage.org","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/allenai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-01-14T05:49:08.000Z","updated_at":"2024-09-14T19:00:39.000Z","dependencies_parsed_at":"2024-11-08T02:17:45.690Z","dependency_job_id":"776f5319-81a2-47ea-9751-b2339e65441d","html_url":"https://github.com/allenai/papermage","commit_stats":null,"previous_names":["allenai/papermage"],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/allenai%2Fpapermage","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/allenai%2Fpapermage/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/allenai%2Fpapermage/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitH
ub/repositories/allenai%2Fpapermage/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/allenai","download_url":"https://codeload.github.com/allenai/papermage/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":242161458,"owners_count":20081876,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","machine-learning","multimodal","natural-language-processing","pdf-processing","python","scientific-papers"],"created_at":"2024-10-20T15:02:35.553Z","updated_at":"2025-10-13T15:57:02.228Z","avatar_url":"https://github.com/allenai.png","language":"Python","readme":"# papermage\n\n⚠️ This project is a research prototype for EMNLP 2023. Due to other project priorities, we are unlikely to be addressing issues / maintaining this on a regular cadence. We are working on related scientific PDF parsing functionality under the [Dolma](https://github.com/allenai/dolma) project banner, so please keep an eye there for a new release on the horizon. 
Thanks!\n\n### Setup\n\n```bash\nconda create -n papermage python=3.11\nconda activate papermage\n```\n\nIf you're installing from source:\n```\npip install -e '.[dev,predictors,visualizers]'\n```\n\nIf you're installing from PyPI:\n```\npip install 'papermage[dev,predictors,visualizers]'\n```\n\n(you may need to add/remove quotes depending on your command line shell).\n\n\nIf you're on macOS, you'll also want to run:\n```\nconda install poppler\n```\n\n\n## Unit testing\n```bash\npython -m pytest\n```\nFor the latest failed test:\n```bash\npython -m pytest --lf --no-cov -n0\n```\nFor a specific test name or class name:\n```bash\npython -m pytest -k 'TestPDFPlumberParser' --no-cov -n0\n```\n\n## Quick start\n\n#### 1. Create a Document for the first time from a PDF\n```python\nfrom papermage.recipes import CoreRecipe\n\nrecipe = CoreRecipe()\ndoc = recipe.run(\"tests/fixtures/papermage.pdf\")\n```\n\n#### 2. Understanding the output: the `Document` class\n\nWhat is a `Document`? At minimum, it is some text, saved under the `.symbols` layer, which is just a `\u003cstr\u003e`.  For example:\n\n```python\n\u003e doc.symbols\n\"PaperMage: A Unified Toolkit for Processing, Representing, and\\nManipulating Visually-...\"\n```\n\nBut this library is really useful when you have multiple different ways of segmenting `.symbols`. For example, segmenting the paper into Pages, and then each page into Rows:\n\n```python\nfor page in doc.pages:\n    print(f'\\n=== PAGE: {page.id} ===\\n\\n')\n    for row in page.rows:\n        print(row.text)\n        \n...\n=== PAGE: 5 ===\n\n4\nVignette: Building an Attributed QA\nSystem for Scientific Papers\nHow could researchers leverage papermage for\ntheir research? 
Here, we walk through a user sce-\nnario in which a researcher (Lucy) is prototyping\nan attributed QA system for science.\nSystem Design.\nDrawing inspiration from Ko\n...\n```\n\nThis shows two nice aspects of this library:\n\n* `Document` provides iterables for different segmentations of `symbols`.  Options include things like `pages, tokens, rows, sentences, sections, ...`.  Not every Parser will provide every segmentation, though.\n\n* Each one of these segments (in our library, we call them `Entity` objects) is aware of (and can access) other segment types. For example, you can call `page.rows` to get all Rows that intersect a particular Page. Or you can call `sent.tokens` to get all Tokens that intersect a particular Sentence. Or you can call `sent.rows` to get the Row(s) that intersect a particular Sentence. These indexes are built *dynamically* when the `Document` is created and each time a new `Entity` type is added. In the extreme, as long as those layers are available in the Document, you can write:\n\n```python\nfor page in doc.pages:\n    for sent in page.sentences:\n        for row in sent.rows: \n            ...\n```\n\nYou can check which layers are available in a Document via:\n\n```python\n\u003e doc.layers\n['tokens',\n 'rows',\n 'pages',\n 'words',\n 'sentences',\n 'blocks',\n 'vila_entities',\n 'titles',\n 'authors',\n 'abstracts',\n 'keywords',\n 'sections',\n 'lists',\n 'bibliographies',\n 'equations',\n 'algorithms',\n 'figures',\n 'tables',\n 'captions',\n 'headers',\n 'footers',\n 'footnotes',\n 'symbols',\n 'images',\n 'metadata',\n 'entities',\n 'relations']\n```\n\n#### 3. Understanding intersection of Entities\n\nNote that `Entity` objects don't necessarily nest perfectly within each other. For example, what happens if you run:\n\n```python\nfor sent in doc.sentences:\n    for row in sent.rows:\n        print([token.text for token in row.tokens])\n```\n\nTokens that are *outside* each sentence can still be printed. 
This is because when we jump from a sentence to its rows, we are looking for *all* rows that have *any* overlap with the sentence. Rows can extend beyond sentence boundaries, and as such, can contain tokens outside that sentence.\n\nA key aspect of using this library is understanding how these different layers are defined \u0026 anticipating how they might interact with each other. We try to make decisions that are intuitive, but we do ask users to experiment with layers to build up familiarity.\n\n\n\n#### 4. What's in an `Entity`?\n\nEach `Entity` object stores information about its contents and position:\n\n* `.spans: List[Span]`: A `Span` is a pointer into `Document.symbols` (that is, `Span(start=0, end=5)` corresponds to `symbols[0:5]`). By default, when you iterate over an `Entity`, you iterate over its `.spans`.\n\n* `.boxes: List[Box]`: A `Box` represents a rectangular region on the page. Each span is associated with a Box.\n\n* `.metadata: Metadata`: A free-form, dictionary-like object for storing extra metadata about that `Entity`. These are usually empty.\n\n\n\n#### 5. How can I manually create my own `Document`?\n\nA `Document` is created by stitching together three types of tools: `Parsers`, `Rasterizers` and `Predictors`.\n\n* `Parsers` take a PDF as input and return a `Document` composed of `.symbols` and other layers. The example one we use is a wrapper around [PDFPlumber](https://github.com/jsvine/pdfplumber), an MIT-licensed utility.\n\n* `Rasterizers` take a PDF as input and return an `Image` per page that is added to `Document.images`. The example one we use is [PDF2Image](https://github.com/Belval/pdf2image) (MIT License). \n\n* `Predictors` take a `Document` and apply some operation to compute a new set of `Entity` objects that we can insert into our `Document`. These are all built in-house and can be either simple heuristics or full machine-learning models.\n\n\n\n#### 6. 
How can I save my `Document`?\n\n```python\nimport json\nwith open('filename.json', 'w') as f_out:\n    json.dump(doc.to_json(), f_out, indent=4)\n```\n\nwill produce something akin to:\n```python\n{\n    \"symbols\": \"PaperMage: A Unified Toolkit for Processing, Representing, an...\",\n    \"entities\": {\n        \"rows\": [...],\n        \"tokens\": [...],\n        \"words\": [...],\n        \"blocks\": [...],\n        \"sentences\": [...]\n    },\n    \"metadata\": {...}\n}\n```\n\n\n#### 7. How can I load my `Document`?\n\nThe saved JSON can be used to reconstruct a `Document` again via:\n\n```python\nwith open('filename.json') as f_in:\n    doc_dict = json.load(f_in)\n    doc = Document.from_json(doc_dict)\n```\n\n\nNote: A common pattern for adding layers to a document is to load in a previously saved document, run some additional `Predictors` on it, and save the result.\n\nSee `papermage/predictors/README.md` for more information about training custom predictors on your own data.\n\nSee `papermage/examples/quick_start_demo.ipynb` for a notebook walking through some more usage patterns.\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fallenai%2Fpapermage","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fallenai%2Fpapermage","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fallenai%2Fpapermage/lists"}