Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
🍳 Recipes for Prodigy, our fully scriptable annotation tool
https://github.com/explosion/prodigy-recipes
- Host: GitHub
- URL: https://github.com/explosion/prodigy-recipes
- Owner: explosion
- Created: 2017-12-09T12:57:34.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2024-08-04T07:30:50.000Z (6 months ago)
- Last Synced: 2025-01-18T03:03:15.993Z (13 days ago)
- Topics: active-learning, annotation, annotation-tool, artificial-intelligence, computer-vision, data-annotation, data-science, labeling-tool, machine-learning, machine-teaching, natural-language-processing, nlp, prodigy, spacy
- Language: Jupyter Notebook
- Homepage: https://prodi.gy
- Size: 15.3 MB
- Stars: 485
- Watchers: 26
- Forks: 116
- Open Issues: 5
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Prodigy Recipes
This repository contains a collection of recipes for
[Prodigy](https://prodi.gy), our scriptable annotation tool for text, images and
other data. In order to use this repo, you'll need a license for Prodigy –
[see this page](https://prodi.gy/buy) for more details. For questions and bug
reports, please use the [Prodigy Support Forum](https://support.prodi.gy). If
you've found a mistake or bug, feel free to submit a
[pull request](https://github.com/explosion/prodigy-recipes/pulls).

> ✨ **Important note:** The recipes in this repository aren't 100% identical to
> the built-in recipes shipped with Prodigy. They've been edited to include
> comments and more information, and some of them have been simplified to make
> it easier to follow what's going on, and to use them as the basis for a custom
> recipe.

## 📋 Usage
Once Prodigy is installed, you should be able to run the `prodigy` command from
your terminal, either directly or via `python -m`:

```bash
python -m prodigy
```

The `prodigy` command lists the built-in recipes. To use a custom recipe script,
simply pass the path to the file using the `-F` argument:

```bash
python -m prodigy ner.teach your_dataset en_core_web_sm ./data.jsonl --label PERSON -F prodigy-recipes/ner/ner_teach.py
```

You can also use the `--help` flag for an overview of the available arguments of a recipe, e.g. `prodigy ner.teach -F ner_teach.py --help`.
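Recipe scripts typically turn their source file into a stream of task dictionaries. As a rough, Prodigy-independent sketch (the `load_jsonl` helper below is a hypothetical name, not part of Prodigy's API), a JSONL source like `./data.jsonl` can be read with the standard library alone:

```python
import json

def load_jsonl(path):
    """Yield one task dict per non-empty line of a JSONL file.

    Hypothetical helper for illustration; each task is a plain dict,
    and for text recipes the "text" key holds the input text.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

A recipe would then pass a generator like this on as its `stream`, so examples are processed lazily rather than loaded all at once.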
### Some things to try
You can edit the code in the recipe script to customize how Prodigy behaves.
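For example, two common customizations — a custom sorter and a stream filter — might look like the sketch below. It is pure Python with no Prodigy imports, and the function names are made up for illustration; a sorter consumes `(score, example)` tuples and yields example dicts, while a filter consumes a stream of examples and yields a subset:

```python
def prefer_high_scores_first(scored_stream, threshold=0.5):
    """Yield example dicts whose score is at or above the threshold.

    Illustrative sorter: takes (score, example) tuples, yields examples.
    """
    for score, example in scored_stream:
        if score >= threshold:
            yield example

def filter_two_word_entities(stream):
    """Only keep examples whose first span covers exactly two tokens.

    Assumes spans carry inclusive "token_start"/"token_end" indices.
    """
    for example in stream:
        spans = example.get("spans", [])
        if spans and spans[0]["token_end"] - spans[0]["token_start"] == 1:
            yield example
```

Because both are plain generators, they compose naturally: wrap the model's scored stream in the sorter, then wrap that in the filter before returning it from the recipe.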
- Try replacing `prefer_uncertain()` with `prefer_high_scores()`.
- Try writing a custom sorting function. It just needs to be a generator that
yields a sequence of `example` dicts, given a sequence of `(score, example)`
tuples.
- Try adding a filter that drops some questions from the stream. For instance,
try writing a filter that only asks you questions where the entity is two
words long.
- Try customizing the `update()` callback, to include extra logging or extra
functionality.

## 🍳 Recipes
### Named Entity Recognition
| Recipe | Description |
| ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [`ner.teach`](ner/ner_teach.py) | Collect the best possible training data for a named entity recognition model with the model in the loop. Based on your annotations, Prodigy will decide which questions to ask next. |
| [`ner.match`](ner/ner_match.py) | Suggest phrases that match a given patterns file, and mark whether they are examples of the entity you're interested in. The patterns file can include exact strings or token patterns for use with spaCy's `Matcher`. |
| [`ner.manual`](ner/ner_manual.py) | Mark spans manually by token. Requires only a tokenizer and no entity recognizer, and doesn't do any active learning. Optionally, pre-highlight spans based on patterns. |
| [`ner.fuzzy_manual`](ner/ner_fuzzy_manual.py) | Like `ner.manual` but use `FuzzyMatcher` from [`spaczz`](https://github.com/gandersen101/spaczz) library to pre-highlight candidates. |
| [`ner.manual.bert`](other/transformers_tokenizers.py) | Use BERT word piece tokenizer for efficient manual NER annotation for transformer models. |
| [`ner.correct`](ner/ner_correct.py) | Create gold-standard data by correcting a model's predictions manually. This recipe used to be called [`ner.make_gold`](ner/ner_make_gold.py). |
| [`ner.silver-to-gold`](ner/ner_silver_to_gold.py) | Take an existing "silver" dataset with binary accept/reject annotations, merge the annotations to find the best possible analysis given the constraints defined in the annotations, and manually edit it to create a perfect and complete "gold" dataset. |
| [`ner.eval_ab`](ner/ner_eval_ab.py) | Evaluate two NER models by comparing their predictions and building an evaluation set from the stream. |
### Text Classification
| Recipe | Description |
| --------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [`textcat.manual`](textcat/textcat_manual.py) | Manually annotate categories that apply to a text. Supports annotation tasks with single and multiple labels. Multiple labels can optionally be flagged as exclusive.|
| [`textcat.correct`](textcat/textcat_correct.py) | Correct the textcat model's predictions manually. Predictions above the acceptance threshold (0.5 by default) will be preselected automatically. Prodigy will infer whether the categories should be mutually exclusive based on the component configuration. |
| [`textcat.teach`](textcat/textcat_teach.py) | Collect the best possible training data for a text classification model with the model in the loop. Based on your annotations, Prodigy will decide which questions to ask next.|
| [`textcat.custom-model`](textcat/textcat_custom_model.py) | Use active learning-powered text classification with a custom model. To demonstrate how it works, this demo recipe uses a simple dummy model that "predicts" random scores. But you can swap it out for any model of your choice, for example a text classification model implementation using PyTorch, TensorFlow or scikit-learn. |

### Terminology
| Recipe | Description |
| ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [`terms.teach`](terms/terms_teach.py) | Bootstrap a terminology list with word vectors and seed terms. Prodigy will suggest similar terms based on the word vectors, and update the target vector accordingly. |

### Image
| Recipe | Description |
| ----------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [`image.manual`](image/image_manual.py) | Manually annotate images by drawing rectangular bounding boxes or polygon shapes on the image. |
| [`image-caption`](image/image_caption/image_caption.py) | Annotate images with captions, pre-populate captions with an image captioning model implemented in PyTorch, and perform error analysis. |
| [`image.frozenmodel`](image/tf_odapi/image_frozen_model.py) | Model-in-the-loop manual annotation using [TensorFlow's Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). |
| [`image.servingmodel`](image/tf_odapi/image_tf_serving.py) | Model-in-the-loop manual annotation using [TensorFlow's Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection), served via [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving). |
| [`image.trainmodel`](image/tf_odapi/image_train.py) | Model-in-the-loop manual annotation and training using [TensorFlow's Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). |

### Other
| Recipe | Description |
| --------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [`mark`](other/mark.py) | Click through pre-prepared examples, with no model in the loop. |
| [`choice`](other/choice.py) | Annotate data with multiple-choice options. The annotated examples will have an additional property `"accept": []` mapping to the ID(s) of the selected option(s). |
| [`question_answering`](other/question_answering.py) | Annotate question/answer pairs with a custom HTML interface. |

### Community recipes
| Recipe | Author | Description |
| -------------------------------- | ---------- | ------------------------------------------------------------------------------------------------------- |
| `phrases.teach` | @kabirkhan | Now part of [`sense2vec`](https://github.com/explosion/sense2vec). |
| `phrases.to-patterns` | @kabirkhan | Now part of [`sense2vec`](https://github.com/explosion/sense2vec). |
| [`records.link`](contrib/dedupe) | @kabirkhan | Link records across multiple datasets using the [`dedupe`](https://github.com/dedupeio/dedupe) library. |

### Tutorial recipes
These recipes have made an appearance in one of our tutorials.
| Recipe | Description |
| ----------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
| [`span-and-textcat`](tutorials/span-and-textcat/) | Do both spancat and textcat annotations at the same time. Great for chatbots! |
| [`terms.from-ner`](tutorials/terms-from-ner/) | Generate terms from previous NER annotations. |
| [`audio-with-transcript`](tutorials/audio-with-transcript/) | Handles both manual audio annotation as well as transcription. |
| [`progress`](tutorials/progress-update) | Demo of an `update` callback that tracks annotation speed. |

## 📚 Example Datasets and Patterns
To make it even easier to get started, we've also included a few
[`example-datasets`](example-datasets), both raw data as well as data containing
annotations created with Prodigy. For examples of token-based match patterns to
use with recipes like `ner.teach` or `ner.match`, see the
[`example-patterns`](example-patterns) directory.
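Token-based patterns follow spaCy's `Matcher` format: each line of a JSONL patterns file is an object with a `label` and a `pattern`, where the pattern is either an exact string or a list of token-attribute dicts. As a small sketch using only the standard library (the file name and helper name are illustrative):

```python
import json

patterns = [
    # Exact-string pattern
    {"label": "ORG", "pattern": "Explosion"},
    # Token-based pattern: two tokens matched on their lowercase form
    {"label": "PERSON", "pattern": [{"lower": "ada"}, {"lower": "lovelace"}]},
]

def write_patterns(entries, path):
    """Write one JSON object per line (the JSONL patterns format)."""
    with open(path, "w", encoding="utf-8") as f:
        for entry in entries:
            f.write(json.dumps(entry) + "\n")
```

A file written this way can then be passed to pattern-aware recipes such as `ner.match` via their patterns argument.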