{"id":13429783,"url":"https://github.com/jonathanherzig/commonsenseqa","last_synced_at":"2025-03-16T04:31:03.522Z","repository":{"id":201783968,"uuid":"175381487","full_name":"jonathanherzig/commonsenseqa","owner":"jonathanherzig","description":"Author implementation of the paper \"CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge\"","archived":false,"fork":false,"pushed_at":"2024-07-25T10:12:42.000Z","size":91,"stargazers_count":152,"open_issues_count":3,"forks_count":28,"subscribers_count":3,"default_branch":"master","last_synced_at":"2024-10-27T08:37:03.233Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://www.tau-nlp.org/commonsenseqa","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jonathanherzig.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2019-03-13T08:49:41.000Z","updated_at":"2024-10-18T13:50:58.000Z","dependencies_parsed_at":null,"dependency_job_id":"71363b2b-9a10-489c-b0b0-c8254378c421","html_url":"https://github.com/jonathanherzig/commonsenseqa","commit_stats":null,"previous_names":["jonathanherzig/commonsenseqa"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jonathanherzig%2Fcommonsenseqa","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jonathanherzig%2Fcommonsenseqa/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jonathanherzig%2Fcommonsenseqa/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jonathanherzig%2Fcommonsenseqa/manifests","owner_url":"ht
tps://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jonathanherzig","download_url":"https://codeload.github.com/jonathanherzig/commonsenseqa/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243826788,"owners_count":20354220,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T02:00:45.343Z","updated_at":"2025-03-16T04:31:03.163Z","avatar_url":"https://github.com/jonathanherzig.png","language":"Python","readme":"\n# CommonsenseQA\n\nA question answering dataset for commonsense reasoning.\n\nCheck out the [website][commonsense-qa-website]!\n\n\n## Downloading the Data\n\nYou can download the data from the [website][commonsense-qa-website],\nwhich also has an evaluation script. The\n[leaderboard][commonsense-qa-leaderboard] is for the `random` split of\nthe data.\n\n
## Running ESIM\n\nOur implementation is based on [this code](https://github.com/rowanz/swagaf/tree/master/swag_baselines/esim). To run it, follow these steps:\n\n1. Install ESIM dependencies:\n    ```\n    cd esim\n    pip install -r requirements.txt\n    cd ..\n    ```\n2. Place the dataset in the `data/` folder.\n3. Set `PYTHONPATH` to the `commonsenseqa` directory:\n    `export PYTHONPATH=$(pwd)`\n4. Run the model with pre-trained GloVe embeddings:\n    ```\n    python -m allennlp.run train esim/train-glove-csqa.json -s tmp --include-package esim\n    ```\n5. Alternatively, run the model with ELMo pretrained contextual embeddings:\n    ```\n    python -m allennlp.run train esim/train-elmo-csqa.json -s tmp --include-package esim\n    ```\n\n
## Running BERT\n\nTo run BERT on CommonsenseQA, first install the BERT dependencies:\n\n    cd bert/\n    pip install -r requirements.txt\n\nThen, [obtain the CommonsenseQA data](#downloading-the-data), and\ndownload the [pretrained BERT weights][downloading-bert-weights]. For\nthe paper, we used [`BERT Large, Uncased`][bert-large-weights]. To train\nBERT Large, you'll most likely need to [use a TPU][tpu-info]; BERT Base\ncan be trained on a standard GPU.\n\nTo run training:\n\n**GPU**\n\n    python run_commonsense_qa.py \\\n      --split=$SPLIT \\\n      --do_train=true \\\n      --do_eval=true \\\n      --data_dir=$DATA_DIR \\\n      --vocab_file=$BERT_DIR/vocab.txt \\\n      --bert_config_file=$BERT_DIR/bert_config.json \\\n      --init_checkpoint=$BERT_DIR/bert_model.ckpt \\\n      --max_seq_length=128 \\\n      --train_batch_size=16 \\\n      --learning_rate=2e-5 \\\n      --num_train_epochs=3.0 \\\n      --output_dir=$OUTPUT_DIR\n\n**TPU**\n\n    python run_commonsense_qa.py \\\n      --split=$SPLIT \\\n      --use_tpu=true \\\n      --tpu_name=$TPU_NAME \\\n      --do_train=true \\\n      --do_eval=true \\\n      --data_dir=$DATA_DIR \\\n      --vocab_file=$BERT_DIR/vocab.txt \\\n      --bert_config_file=$BERT_DIR/bert_config.json \\\n      --init_checkpoint=$BERT_DIR/bert_model.ckpt \\\n      --max_seq_length=128 \\\n      --train_batch_size=16 \\\n      --learning_rate=2e-5 \\\n      --num_train_epochs=3.0 \\\n      --output_dir=$OUTPUT_DIR\n\n
For TPUs, all directories must be in Google Cloud Storage. The environment\nvariables have the following meanings:\n\n  - `$SPLIT` should be either `rand` or `qtoken`, depending on the split\n    you'd like to run.\n  - `$DATA_DIR` is the location of the CommonsenseQA data.\n  - `$BERT_DIR` is the location of the pre-trained BERT files.\n  - `$TPU_NAME` is the name of the TPU.\n  - `$OUTPUT_DIR` is the directory to write output to.\n\nTo predict on the test set, run:\n\n**GPU (only)**\n\n    python run_commonsense_qa.py \\\n      --split=$SPLIT \\\n      --do_predict=true \\\n      --data_dir=$DATA_DIR \\\n      --vocab_file=$BERT_DIR/vocab.txt \\\n      --bert_config_file=$BERT_DIR/bert_config.json \\\n      --init_checkpoint=$TRAINED_CHECKPOINT \\\n      --max_seq_length=128 \\\n      --output_dir=$OUTPUT_DIR\n\nPrediction must be run on a GPU (including for BERT Large). All\nenvironment variables have the same meanings, and the new variable\n`$TRAINED_CHECKPOINT` is simply the prefix of your trained checkpoint\nfiles from fine-tuning BERT. It should look something like\n`$OUTPUT_DIR/model.ckpt-1830`.\n\n\n
[bert-large-weights]: https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-24_H-1024_A-16.zip\n[commonsense-qa-leaderboard]: https://www.tau-nlp.org/csqa-leaderboard\n[commonsense-qa-website]: https://www.tau-nlp.org/commonsenseqa\n[downloading-bert-weights]: https://github.com/google-research/bert#pre-trained-models\n[tpu-info]: https://cloud.google.com/tpu/\n","funding_links":[],"categories":["3 Reasoning Tasks","Anthropomorphic-Taxonomy"],"sub_categories":["3.1 Commonsense Reasoning","Typical Intelligence Quotient (IQ)-General Intelligence evaluation benchmarks"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjonathanherzig%2Fcommonsenseqa","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjonathanherzig%2Fcommonsenseqa","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjonathanherzig%2Fcommonsenseqa/lists"}