Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/nayeon7lee/bert-summarization
- Host: GitHub
- URL: https://github.com/nayeon7lee/bert-summarization
- Owner: nayeon7lee
- Created: 2019-04-12T14:41:48.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2022-12-08T04:58:34.000Z (almost 2 years ago)
- Last Synced: 2024-06-07T16:48:00.079Z (5 months ago)
- Language: Python
- Size: 51.8 KB
- Stars: 120
- Watchers: 8
- Forks: 32
- Open Issues: 17
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-bert - nayeon7lee/bert-summarization - 'Pretraining-Based Natural Language Generation for Text Summarization', Paper: https://arxiv.org/pdf/1902.09243.pdf (BERT Text Summarization Task)
README
## Implementation of 'Pretraining-Based Natural Language Generation for Text Summarization'
Paper: https://arxiv.org/pdf/1902.09243.pdf
### Versions
* python 2.7
* PyTorch: 1.0.1.post2

### Preparing package/dataset
0. Run: `pip install -r requirements.txt` to install required packages
1. Download the chunked CNN/DailyMail data from: https://github.com/JafferWilson/Process-Data-of-CNN-DailyMail
2. Run: `python news_data_reader.py` to create a pickle file that will be used by the data loader (a loading sketch follows this list)
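The repository does not document the pickle's layout, so as a minimal sketch, assuming it holds a list of (article, summary) string pairs and using the hypothetical file name `news_data.pkl`, a PyTorch `Dataset` wrapper could look like this:

```python
import pickle

from torch.utils.data import Dataset


class SummarizationDataset(Dataset):
    """Wraps the pickle produced by news_data_reader.py (layout assumed)."""

    def __init__(self, pickle_path="news_data.pkl"):  # file name is an assumption
        with open(pickle_path, "rb") as f:
            # Assumed layout: a list of (article_text, summary_text) tuples.
            self.examples = pickle.load(f)

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]  # one (article, summary) pair
```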
### Running the model

The model was too big for my GPU, so I used the following smaller parameters for debugging purposes:
`CUDA_VISIBLE_DEVICES=3 python main.py --cuda --batch_size=2 --hop 4 --hidden_dim 100`

### Note to reviewer
* Although I implemented the core part (two-step summary generation using BERT), I didn't have enough time to implement the RL section.
* The second decoder pass is very time-consuming, since it must build a fresh BERT context vector at every timestep; a sketch of that loop follows below.
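To show why stage 2 is expensive, here is a minimal sketch of a per-timestep refine loop in the spirit of the paper, using the modern Hugging Face `transformers` API as a stand-in (this repo predates it and targets Python 2.7 / PyTorch 1.0.1). `refine_draft` and `refine_decoder` are hypothetical names for illustration, not the repo's actual code:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")


def refine_draft(draft_ids, refine_decoder):
    """Stage 2: re-predict every draft token from a BERT context vector.

    For each position i, mask the draft token, re-encode the whole masked
    draft with BERT, and let the (hypothetical) refine decoder re-predict
    token i. One full BERT forward pass per timestep is what makes this slow.
    """
    refined = draft_ids.clone()
    for i in range(draft_ids.size(1)):
        masked = draft_ids.clone()
        masked[:, i] = tokenizer.mask_token_id
        with torch.no_grad():
            context = bert(masked).last_hidden_state  # (batch, len, hidden)
        logits = refine_decoder(context[:, i, :])     # (batch, vocab)
        refined[:, i] = logits.argmax(dim=-1)
    return refined


# refine_decoder could be as simple as a linear projection to the vocabulary:
# refine_decoder = torch.nn.Linear(bert.config.hidden_size, bert.config.vocab_size)
```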