https://github.com/jonathanraiman/wikipedia_ner
:book: Labeled examples from wiki dumps in Python
- Host: GitHub
- URL: https://github.com/jonathanraiman/wikipedia_ner
- Owner: JonathanRaiman
- Created: 2014-10-27T18:27:36.000Z (about 11 years ago)
- Default Branch: master
- Last Pushed: 2016-08-08T05:06:39.000Z (over 9 years ago)
- Last Synced: 2024-09-15T12:49:31.279Z (over 1 year ago)
- Topics: dataset, named-entity-recognition, python, text-extraction, wikipedia
- Language: Jupyter Notebook
- Homepage:
- Size: 108 KB
- Stars: 68
- Watchers: 4
- Forks: 7
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
Wikipedia NER
-------------
A tool for training on and extracting labeled named-entity-recognition examples
from Wikipedia dumps.
Usage in [IPython notebook](http://nbviewer.ipython.org/github/JonathanRaiman/wikipedia_ner/blob/master/Wikipedia%20to%20Named%20Entity%20Recognition.ipynb) (*nbviewer* link).
## Usage
Here is an example usage with the first 200 articles from the English Wikipedia dump (dated late 2013):
```python
import wikipedia_ner

parseresult = wikipedia_ner.parse_dump("enwiki.bz2", max_articles=200)

most_common_category = wikipedia_ner.ParsedPage.categories_counter.most_common(1)[0][0]
most_common_category_children = [
    parseresult.index2target[child]
    for child in wikipedia_ner.ParsedPage.categories[most_common_category].children
]

"In '%s' the children are %r" % (
    most_common_category,
    ", ".join(most_common_category_children)
)
#=> "In 'Category : Member states of the United Nations' the children are 'Afghanistan, Algeria, Andorra, Antigua and Barbuda, Azerbaijan, Angola, Albania'"
```
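The underlying idea — treating Wikipedia's internal links as entity annotations — can be sketched independently of this library. The following is a hypothetical, regex-based illustration (not the project's actual implementation): each `[[Target|anchor]]` link yields a labeled example pairing the anchor text with its target page.

```python
import re

# Matches [[Target]] or [[Target|anchor text]] wiki links.
LINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

def labeled_examples(wikitext):
    """Return (plain_text, [(surface, target), ...]) for one article's markup."""
    entities = []

    def repl(match):
        target = match.group(1)
        surface = match.group(2) or target  # bare links use the target as text
        entities.append((surface, target))
        return surface

    return LINK.sub(repl, wikitext), entities

text, ents = labeled_examples(
    "[[Paris]] is the capital of [[France|the French Republic]]."
)
# text -> "Paris is the capital of the French Republic."
# ents -> [("Paris", "Paris"), ("the French Republic", "France")]
```

A real dump parser must additionally handle templates, nested markup, and category links, which is what the library's `parse_dump` takes care of.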