Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/titipata/scipdf_parser
Python PDF parser for scientific publications: content and figures
- Host: GitHub
- URL: https://github.com/titipata/scipdf_parser
- Owner: titipata
- License: mit
- Created: 2019-07-03T16:50:01.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2024-03-21T09:30:04.000Z (10 months ago)
- Last Synced: 2024-12-29T19:16:57.925Z (8 days ago)
- Topics: grobid, parser, pdf, pdf-parser, python-parser, scipdf-parser
- Language: Python
- Homepage:
- Size: 29.2 MB
- Stars: 378
- Watchers: 8
- Forks: 60
- Open Issues: 15
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
README
# SciPDF Parser
A Python parser for scientific PDFs, based on [GROBID](https://github.com/kermitt2/grobid).
## Installation
Use `pip` to install from this GitHub repository:
```bash
pip install git+https://github.com/titipata/scipdf_parser
```

**Note**
* You will also need spaCy's `en_core_web_sm` model; download it with `python -m spacy download en_core_web_sm`
* You can change the GROBID version in `serve_grobid.sh` to test the parser against a newer GROBID release

## Usage
Run GROBID with the provided bash script before parsing PDFs.
**NOTE**: the recommended way to [run GROBID is via Docker](https://grobid.readthedocs.io/en/latest/Grobid-docker/), so make sure Docker is running on your machine. Update the script to use the [latest version](https://github.com/kermitt2/grobid/releases); each new release generally brings substantial improvements.
```bash
bash serve_grobid.sh
```

This script runs GROBID on the default port, 8070 (see the [GROBID service documentation](https://grobid.readthedocs.io/en/latest/Grobid-service/) for details).
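If you prefer to run GROBID via Docker directly, as the GROBID documentation recommends, a minimal sketch follows; the image tag below is an assumption, so check the [GROBID releases](https://github.com/kermitt2/grobid/releases) for the current version:

```shell
# Serve GROBID on the default port 8070 using the lightweight CRF-only image.
# The tag 0.8.0 is an assumption; substitute the latest release.
docker run --rm --init -p 8070:8070 lfoppiano/grobid:0.8.0
```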
To parse a PDF from the `example_data` folder or from a direct URL, use the following functions:

```python
import scipdf
article_dict = scipdf.parse_pdf_to_dict('example_data/futoma2017improved.pdf')  # returns a dictionary
# you can also parse directly from a URL to a PDF; if as_list is set to True,
# the 'text' of each parsed section is returned as a list of paragraphs instead
article_dict = scipdf.parse_pdf_to_dict('https://www.biorxiv.org/content/biorxiv/early/2018/11/20/463760.full.pdf', as_list=False)

# output example
>> {
'title': 'Proceedings of Machine Learning for Healthcare',
'abstract': '...',
'sections': [
{'heading': '...', 'text': '...'},
{'heading': '...', 'text': '...'},
...
],
'references': [
{'title': '...', 'year': '...', 'journal': '...', 'author': '...'},
...
],
'figures': [
{'figure_label': '...', 'figure_type': '...', 'figure_id': '...', 'figure_caption': '...', 'figure_data': '...'},
...
],
'doi': '...'
}

xml = scipdf.parse_pdf('example_data/futoma2017improved.pdf', soup=True)  # option to parse the full XML output from GROBID
```

To parse figures from a PDF using [pdffigures2](https://github.com/allenai/pdffigures2), run:
```python
scipdf.parse_figures('example_data', output_folder='figures')  # the folder should contain only PDF files
```

Example output figures can be found in the `figures` folder.
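Once a PDF is parsed, the returned dictionary can be traversed like any other Python dict. A minimal sketch, using hypothetical sample data in place of a real `scipdf.parse_pdf_to_dict` result:

```python
# Hypothetical sample data mimicking the output structure shown above;
# a real call would be: article_dict = scipdf.parse_pdf_to_dict(path)
article_dict = {
    'title': 'Proceedings of Machine Learning for Healthcare',
    'abstract': '...',
    'sections': [
        {'heading': 'Introduction', 'text': '...'},
        {'heading': 'Methods', 'text': '...'},
    ],
    'references': [],
    'figures': [],
    'doi': '...',
}

# collect section headings in reading order
headings = [section['heading'] for section in article_dict['sections']]
print(headings)  # ['Introduction', 'Methods']
```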