Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://rajpurkar.github.io/SQuAD-explorer/
Visually Explore the Stanford Question Answering Dataset
dataset leaderboard visual-analysis
Last synced: 3 months ago
- Host: GitHub
- URL: https://rajpurkar.github.io/SQuAD-explorer/
- Owner: rajpurkar
- License: mit
- Created: 2016-08-23T07:57:52.000Z (almost 8 years ago)
- Default Branch: master
- Last Pushed: 2023-10-13T20:58:11.000Z (8 months ago)
- Last Synced: 2024-03-23T19:43:57.356Z (3 months ago)
- Topics: dataset, leaderboard, visual-analysis
- Language: JavaScript
- Homepage: https://rajpurkar.github.io/SQuAD-explorer/
- Size: 51.2 MB
- Stars: 533
- Watchers: 30
- Forks: 114
- Open Issues: 10
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
Lists
- awesome-deep-learning-resources - SQuAD The Stanford Question Answering Dataset - Question answering dataset that can be explored online, and a list of models performing well on that dataset. (Practical Resources / Some Datasets)
- awesome-deep-learning - SQuAD - Stanford released ~100,000 English QA pairs and ~50,000 unanswerable questions (Researchers / Datasets)
- awesome-question-answering - Stanford Question Answering Dataset
- awesome-llm - SQuAD 2.0
- awesome-llm-eval - SQUAD
- awesome-nlp-resource - SQuAD
- AwesomeMRC - SQuAD 2.0
- text_mining_resources - SQuAD leaderboard - Best-performing NLP models on the Stanford Question Answering Dataset (SQuAD). (Benchmarks / Knowledge Graphs)
- awesome-question-answering-dataset - SQuAD - Eng
- awesome-azure-openai-llm - SQuAD - 100,000+ question-answer pairs on 500+ articles. [16 Jun 2016] (**Section 11: Datasets for LLM Training** / **Awesome demo**)
- Reading-Comprehension-Question-Answering-Papers - paper
- awesome-knowledge-graph - https://rajpurkar.github.io/SQuAD-explorer/
- awesome-web-resources - The Stanford Question Answering Dataset - 100,000+ question-answer pairs on 500+ articles; SQuAD is significantly larger than previous reading comprehension datasets. (Academic / Data)
- awesome-knowledge-graph-master - https://rajpurkar.github.io/SQuAD-explorer/
- awesome-qa - SQuAD2.0
README
# SQuAD-explorer
The [Stanford Question Answering Dataset](https://rajpurkar.github.io/SQuAD-explorer/) is a large reading comprehension dataset.
This repository is intended to let people explore the dataset and visualize model predictions. The website is hosted on the [gh-pages branch](https://github.com/rajpurkar/SQuAD-explorer/tree/gh-pages).
## Testing models on your own data
Here are instructions for generating predictions on custom data from a model on the SQuAD leaderboard. This is done through [CodaLab Worksheets](https://worksheets.codalab.org/).
1. Get the CodaLab UUID for the model you want to run by clicking on its name on the SQuAD leaderboard. For instance, clicking on the original BERT model submitted by Google AI for SQuAD 2.0 takes you to [https://worksheets.codalab.org/bundles/0xbe9df0807151427f92fc306189b6d63e](https://worksheets.codalab.org/bundles/0xbe9df0807151427f92fc306189b6d63e), which tells you that `0xbe9df0807151427f92fc306189b6d63e` is the CodaLab UUID for this submission.
2. Upload your dataset to CodaLab.
3. Use `cl mimic` to replay the model's run on your data, substituting your dataset bundle for the development set it was originally run on (placeholders in angle brackets):
```
cl mimic <dev-set-uuid> <model-uuid> <your-dataset-uuid>
```
The official SQuAD development set UUIDs are:
* `0x8f29fe78ffe545128caccab74eb06c57` for SQuAD 1.1
* `0xb30d937a18574073903bb38b382aab03` for SQuAD 2.0
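Before uploading in step 2, it can help to sanity-check that your custom data follows the SQuAD-style JSON layout (`data` → `paragraphs` → `qas`, with character-offset answers). A minimal sketch, assuming the standard SQuAD v2.0 field names; `validate_squad` is an illustrative helper, not part of this repository:

```python
import json

def validate_squad(path):
    """Lightly check that a file follows the SQuAD-style JSON layout."""
    with open(path) as f:
        dataset = json.load(f)
    assert "data" in dataset, "missing top-level 'data' key"
    n_questions = 0
    for article in dataset["data"]:
        for para in article["paragraphs"]:
            context = para["context"]
            for qa in para["qas"]:
                assert {"id", "question", "answers"} <= qa.keys()
                for ans in qa["answers"]:
                    # answer_start must point at the answer text inside the context
                    assert context[ans["answer_start"]:].startswith(ans["text"])
                n_questions += 1
    return n_questions
```

A file that passes this check has the right shape for evaluation scripts that expect SQuAD-format input; it does not guarantee the leaderboard model will accept it, so still verify against the mimicked run's expected input.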