Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Visually Explore the Stanford Question Answering Dataset
- Host: GitHub
- URL: https://rajpurkar.github.io/SQuAD-explorer/
- Owner: rajpurkar
- License: mit
- Created: 2016-08-23T07:57:52.000Z (over 8 years ago)
- Default Branch: master
- Last Pushed: 2023-10-13T20:58:11.000Z (about 1 year ago)
- Last Synced: 2024-08-01T09:24:16.546Z (4 months ago)
- Topics: dataset, leaderboard, visual-analysis
- Language: JavaScript
- Homepage: https://rajpurkar.github.io/SQuAD-explorer/
- Size: 51.2 MB
- Stars: 543
- Watchers: 30
- Forks: 116
- Open Issues: 10
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
Awesome Lists containing this project
- Reading-Comprehension-Question-Answering-Papers - paper
- awesome-llm - SQuAD 2.0
- awesome-llm-eval - SQUAD
- awesome-nlp - SQuAD (Stanford Question Answering Dataset) - A dataset for reading comprehension and question answering tasks. (Datasets)
README
# SQuAD-explorer
The [Stanford Question Answering Dataset](https://rajpurkar.github.io/SQuAD-explorer/) is a large reading comprehension dataset.
This repository is intended to let people explore the dataset and visualize model predictions. This website is hosted on the [gh-pages branch](https://github.com/rajpurkar/SQuAD-explorer/tree/gh-pages).
## Testing models on your own data
Here are instructions for generating predictions on your own data with a model from the SQuAD leaderboard. This is done through [CodaLab Worksheets](https://worksheets.codalab.org/).

1. Get the CodaLab UUID for the model you want to run by clicking on its name on the SQuAD leaderboard. For instance, clicking on the original BERT model submitted by Google AI for SQuAD 2.0 takes you to [https://worksheets.codalab.org/bundles/0xbe9df0807151427f92fc306189b6d63e](https://worksheets.codalab.org/bundles/0xbe9df0807151427f92fc306189b6d63e), which tells you that `0xbe9df0807151427f92fc306189b6d63e` is the CodaLab UUID for this submission.
2. Upload your dataset to CodaLab (see the upload sketch after this list).
3. Use `cl mimic` to re-run the model's prediction steps with your dataset swapped in for the official development set (a fuller, hedged invocation is sketched after this list):
```
cl mimic   # see the sketch below for the arguments
```

The official SQuAD development set UUIDs are:
* `0x8f29fe78ffe545128caccab74eb06c57` for SQuAD 1.1
* `0xb30d937a18574073903bb38b382aab03` for SQuAD 2.0
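As a sketch of step 2: the `cl` commands below are standard CodaLab CLI calls, but the filename is hypothetical (your file just needs to follow the SQuAD JSON format), and the `main::` alias assumes a default CodaLab configuration pointing at worksheets.codalab.org.

```
# Step 2 sketch: upload a SQuAD-format JSON file (filename is hypothetical)
pip install codalab              # CodaLab command-line client
cl work main::                   # point the client at your home worksheet on worksheets.codalab.org
cl upload my-squad-data.json     # prints the UUID of the new dataset bundle
```

Note the UUID that `cl upload` prints; you will need it in step 3.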
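For step 3, here is one possible shape of the `cl mimic` call, using the SQuAD 2.0 dev set UUID above and the BERT submission UUID from step 1. The argument order shown (old input, old output, new input) and the `-n` name flag follow general `cl mimic` usage as I understand it, so treat this as a sketch and confirm with `cl mimic --help`.

```
# old input  = official SQuAD 2.0 dev set bundle (listed above)
# old output = the leaderboard submission bundle (BERT example from step 1)
# new input  = the UUID printed when you uploaded your own data (placeholder here)
cl mimic 0xb30d937a18574073903bb38b382aab03 \
         0xbe9df0807151427f92fc306189b6d63e \
         <your-dataset-uuid> \
         -n predictions-on-my-data
```

Once the mimicked run finishes, its output bundle should contain the model's predictions on your data, which you can retrieve with `cl download`.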