SCREWS: A Modular Framework for Reasoning with Revisions
- Host: GitHub
- URL: https://github.com/kumar-shridhar/screws
- Owner: kumar-shridhar
- License: MIT
- Created: 2023-08-24T23:28:51.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2023-08-24T23:29:34.000Z (about 1 year ago)
- Last Synced: 2023-08-25T00:51:43.503Z (about 1 year ago)
- Size: 1.95 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# SCREWS: A Modular Framework for Reasoning with Revisions
**SCREWS** is a modular framework for answering reasoning questions with LLMs by sampling answers, conditionally revising them, and selecting among the candidates. More details in the [paper](https://arxiv.org/abs/2309.13075).
![SCREWS](./Images/Screws.png)
## How to run the code
* Clone the repo
```sh
git clone https://github.com/kumar-shridhar/Screws.git
```
* Install the `openai` Python package, create an OpenAI account, and keep your API key ready:
```sh
pip install openai
```

### Sampling
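At a high level, the sampling step reads questions from a JSONL file, wraps each one in a chain-of-thought (or subquestion) prompt, and queries the LLM. A minimal sketch of the data-loading and prompt-building side (the `question` field name and the prompt wording are illustrative assumptions; the actual prompt lives in `prompts/cot_sample.txt`):

```python
import json

def load_jsonl(text: str) -> list[dict]:
    """Parse JSONL content (one JSON object per line) into a list of records."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

def build_cot_prompt(question: str, instructions: str = "Answer step by step.") -> str:
    """Wrap a question in a chain-of-thought style prompt.

    Illustrative format only; the repo's real prompt template may differ.
    """
    return f"{instructions}\n\nQ: {question}\nA: Let's think step by step."
```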
* Start with the `sampling` module by running:
```sh
# CoT
python sample.py --sampling_type cot --openai_key --data_path ./data/test_gsm8k.jsonl --result_path ./results/cot_sample.jsonl --prompt_path ./prompts/cot_sample.txt

# Subques
python sample.py --sampling_type subques --openai_key --data_path ./data/test_gsm8k_socratic.jsonl --result_path ./results/subques_sample.jsonl --prompt_path ./prompts/subques_sample.txt
```

### Conditional Resampling
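Conceptually, conditional resampling keeps each sampled answer unless a check flags it for revision, in which case a revised answer is generated. A hedged sketch of that control flow (in SCREWS the checker and reviser are LLM calls; here they are plain callables for illustration):

```python
from typing import Callable

def conditionally_resample(
    answer: str,
    needs_revision: Callable[[str], bool],
    revise: Callable[[str], str],
) -> str:
    """Return the original answer unless the checker flags it,
    in which case return the revised answer instead."""
    return revise(answer) if needs_revision(answer) else answer
```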
* Once `sampling` is done, run the `resampling` code to generate the `CoT` and `Subques` versions:
```sh
# CoT
python re-sample.py --sampling_type cot --openai_key --sample_path ./results/cot_sample.jsonl --result_path ./results/cot_resample.jsonl --resample_prompt_path ./prompts/cot_resample.txt

# Subques
python re-sample.py --sampling_type subques --openai_key --sample_path ./results/subques_sample.jsonl --result_path ./results/subques_resample.jsonl --sample_prompt_path ./prompts/subques_sample.txt --resample_prompt_path ./prompts/subques_resample.txt
```

### Selection
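A revision is not always an improvement, so the selection step picks the best candidate rather than blindly trusting the revised answer. A minimal sketch of that idea (`score` stands in for the LLM-based selector and is an illustrative assumption, not the repo's API):

```python
from typing import Callable

def select_answer(candidates: list[str], score: Callable[[str], float]) -> str:
    """Return the highest-scoring candidate. In SCREWS, selection chooses
    between the original sample and its revision, allowing a rollback
    when the revision made the answer worse."""
    return max(candidates, key=score)
```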
* Once `sampling` and `resampling` are done, run the `selection` code to choose the desired output:
```sh
# CoT
python selection.py --sampling_type cot --openai_key --resample_path ./results/cot_resample.jsonl --result_path ./results/cot_selection.jsonl --prompt_path ./prompts/cot_selection.txt

# Subques
python selection.py --sampling_type subques --openai_key --resample_path ./results/subques_resample.jsonl --result_path ./results/subques_selection.jsonl --prompt_path ./prompts/subques_sample.txt --prompt_path ./prompts/cot_selection.txt
```

### Calculating Accuracy
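The accuracy script compares each predicted answer against the gold answer. A hedged sketch of that comparison (extracting the last number from the text is a common GSM8K heuristic, and the `prediction`/`answer` field names are assumptions; the repo's own extraction logic may differ):

```python
import re

def extract_final_number(text: str):
    """Pull the last number out of an answer string (GSM8K-style heuristic)."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(matches[-1]) if matches else None

def accuracy(records: list[dict]) -> float:
    """Fraction of records whose predicted number matches the gold number."""
    hits = sum(
        1 for r in records
        if extract_final_number(r["prediction"]) == extract_final_number(r["answer"])
    )
    return hits / len(records) if records else 0.0
```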
```sh
python calculate_accuracy.py --result_file --sampling_type --type
```

## Citation
```bibtex
@misc{shridhar2023screws,
title={SCREWS: A Modular Framework for Reasoning with Revisions},
author={Kumar Shridhar and Harsh Jhamtani and Hao Fang and Benjamin Van Durme and Jason Eisner and Patrick Xia},
year={2023},
eprint={2309.13075},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```