Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
BabelEnte: Entity Extractor and Translator using BabelFy and Babelnet.org
https://github.com/proycon/babelente
babelfy babelnet computational-linguistics nlp
- Host: GitHub
- URL: https://github.com/proycon/babelente
- Owner: proycon
- Created: 2017-10-13T19:27:49.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2019-06-04T08:43:46.000Z (over 5 years ago)
- Last Synced: 2024-10-03T10:29:05.170Z (about 1 month ago)
- Topics: babelfy, babelnet, computational-linguistics, nlp
- Language: Python
- Size: 3.8 MB
- Stars: 4
- Watchers: 3
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.rst
README
BabelEnte: Entity extractioN, Translation and implicit Evaluation using BabelFy
===============================================================================

This is an entity extractor, translator and evaluator that uses `BabelFy <http://babelfy.org>`_. It was initially developed
for the TraMOOC project and is written in Python 3.

.. image:: https://github.com/proycon/babelente/blob/master/logo.jpg?raw=true
   :align: center

Installation
------------

::

    pip3 install babelente
or clone this GitHub repository and run ``python3 setup.py install``; optionally prepend the commands with ``sudo`` for
global installation.

Usage
-----

You will need a BabelFy API key; get it from `BabelNet.org <http://babelnet.org>`_.
See ``babelente -h`` for extensive usage instructions, explaining all the options.
For simple entity recognition/linking on plain text documents, invoke BabelEnte as follows. This will produce JSON output with all entities found:
``$ babelente -k "YOUR-API-KEY" -s en -S sentences.en.txt > output.json``
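As a quick check you can inspect the resulting JSON from Python. This is only a minimal
sketch: the exact structure of the output object is not documented here, so it lists the
top-level keys rather than assuming any particular field names::

    import json

    with open("output.json") as f:
        data = json.load(f)

    # Show what the output object contains before relying on any field names
    for key, value in data.items():
        print(key, type(value).__name__)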
BabelEnte comes with `FoLiA <https://proycon.github.io/folia>`_ support, allowing you to read FoLiA documents and
produce enriched FoLiA documents that include the detected/linked entities. To this end, simply specify the language
of your FoLiA document(s) and pass them to babelente as follows (multiple documents are allowed):

``$ babelente -k "YOUR-API-KEY" -s en yourdocument.folia.xml``
Each FoLiA document will be output to a new file, which includes all the entities. Entities will be explicitly linked to BabelNet
and DBpedia where possible. At the same time, the ``stdout`` output again consists of a JSON object containing all found
entities. Note that this method does not currently do any translation of entities (I'm open to feature requests
if you want this).

If you start from plain text but want to produce FoLiA output, then first use for instance `ucto
<https://languagemachines.github.io/ucto>`_ to tokenise your document and convert it to FoLiA, prior to passing it to
BabelEnte.
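For example, such a two-step pipeline could look as follows. This is only a sketch
assuming a recent ucto version (``-L eng`` selecting the English tokeniser and ``-X``
requesting FoLiA XML output); consult ``ucto -h`` for the exact flags of your version::

    $ ucto -L eng -X --id mydoc sentences.en.txt mydoc.folia.xml
    $ babelente -k "YOUR-API-KEY" -s en mydoc.folia.xml > entities.json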
Usage for TraMOOC
-----------------

This software can be used for implicit evaluation of translations, as it was designed in the scope of the TraMOOC
project.

To evaluate a translation (English to Portuguese in this example), output will be JSON to stdout:
``$ babelente -k "YOUR-API-KEY" -s en -t pt -S sentences.en.txt -T sentences.pt.txt > output.json``
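Here ``-S`` and ``-T`` point to plain-text sentence files; presumably these hold one
sentence per line, with the source and target files line-aligned. A hypothetical pair:

sentences.en.txt::

    This is a test.
    Welcome to the course.

sentences.pt.txt::

    Isto é um teste.
    Bem-vindo ao curso.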
To re-evaluate:
``$ babelente --evalfile output.json -S sentences.en.txt -T sentences.pt.txt > newoutput.json``
Evaluation
~~~~~~~~~~

The evaluation produces several metrics:
* **source coverage**: the number of characters covered by found source entities divided by the total number of characters in the source text
* **target coverage**: the number of characters covered by found target entities divided by the total number of characters in the target text
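As a toy illustration of the coverage metrics (the numbers are made up)::

    # 120 of 400 characters in the source text fall inside found entities
    covered_chars = 120
    total_chars = 400
    source_coverage = covered_chars / total_chars  # 0.3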
Precision and Recall
~~~~~~~~~~~~~~~~~~~~

In the standard scoring method we count each entity occurrence and compute the following scores. (We also implemented
the option to compute the scores over entity sets, see below.)

* **micro precision**: sum of found equivalent entities in target and source texts divided by the total sum of found entities in the target language
* **macro precision**: sum of found equivalent entities in target and source texts divided by the number of target sentences
* **micro recall**: sum of found equivalent entities in target and source texts divided by the total sum of found entities in the source language for which an equivalent link existed in the target language. In other words: how many of the hypothetically possible matches were found? Note that this is an intensive computation and needs to be requested explicitly with the command-line parameter ``--recall``.
* **macro recall**: sum of found equivalent entities in target and source texts divided by the number of source sentences.
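A toy Python illustration of these four scores, with entirely hypothetical counts::

    # Hypothetical counts for a 50-sentence source text and its translation
    matched = 30              # equivalent entities found in both source and target
    found_target = 40         # all entities found in the target text
    found_source_linked = 36  # source entities with an equivalent target-language link
    source_sentences = 50
    target_sentences = 50

    micro_precision = matched / found_target      # 0.75
    macro_precision = matched / target_sentences  # 0.6
    micro_recall = matched / found_source_linked  # 0.833...
    macro_recall = matched / source_sentences     # 0.6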
**Computing recall and precision over entity sets**

Instead of counting every occurring entity (“tokens”), we can also count each entity once (“types” or “sets”). This can be a more useful performance indicator when the input texts contain many repetitions or slight variations of the same sentences.
This option is activated with the parameter ``--nodup`` (no duplicates).
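The difference between the two counting modes, sketched with hypothetical BabelNet synset IDs::

    found = ["bn:001", "bn:002", "bn:001", "bn:003", "bn:001"]
    token_count = len(found)      # 5: every occurrence counts
    type_count = len(set(found))  # 3: each distinct entity counts once (as with --nodup)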
License
-------

GNU GPL 3.0