# Chain-of-ThoughtsPapers
![](https://img.shields.io/github/last-commit/Timothyxxx/Chain-of-ThoughtsPapers?color=green)
A paper list tracking the research trend started by "Chain of Thought Prompting Elicits Reasoning in Large Language Models".

Check **[Tool use LLMs](https://github.com/xlang-ai/llm-tool-use)** and **[Environment Interactive LLMs](https://github.com/Timothyxxx/EnvInteractiveLMPapers)** for the newer directions we are working on!
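
For readers new to the area, below is a minimal sketch (in Python) of the two prompting styles most of the papers in this list build on: few-shot chain-of-thought prompting (paper 1) and zero-shot chain-of-thought prompting (paper 8). The `complete` function is only a placeholder for whatever LLM API you use, not part of any specific library.

```python
# Minimal sketch of chain-of-thought prompting. `complete` is a placeholder for
# any text-completion LLM call; swap in your own API client.

def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("plug in your LLM API here")

# Few-shot CoT (Wei et al., 2022): the exemplar shows the intermediate reasoning,
# not just the final answer, so the model imitates step-by-step reasoning.
FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

# Zero-shot CoT (Kojima et al., 2022): no exemplars, just a reasoning trigger phrase.
ZERO_SHOT_COT = "Q: {question}\nA: Let's think step by step."

def ask(question: str, few_shot: bool = True) -> str:
    template = FEW_SHOT_COT if few_shot else ZERO_SHOT_COT
    return complete(template.format(question=question))
```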

## Papers

1. **Chain of Thought Prompting Elicits Reasoning in Large Language Models.**

*Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou* [[pdf](https://arxiv.org/abs/2201.11903)] 2022.1

2. **Self-Consistency Improves Chain of Thought Reasoning in Language Models.**

*Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou* [[pdf](https://arxiv.org/abs/2203.11171)] 2022.3 (a minimal decoding sketch appears at the end of this list)

3. **STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning.**

*Eric Zelikman, Yuhuai Wu, Noah D. Goodman* [[pdf](https://arxiv.org/abs/2203.14465)], [[code](https://github.com/ezelikman/STaR)] 2022.3

4. **PaLM: Scaling Language Modeling with Pathways.**

*Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel* [[pdf](https://arxiv.org/abs/2204.02311)] 2022.4

5. **Can language models learn from explanations in context?**

*Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill* [[pdf](https://arxiv.org/abs/2204.02329)] 2022.4

6. **Inferring Implicit Relations with Language Models.**

*Uri Katz, Mor Geva, Jonathan Berant* [[pdf](https://arxiv.org/abs/2204.13778)] 2022.4

7. **The Unreliability of Explanations in Few-Shot In-Context Learning.**

*Xi Ye, Greg Durrett* [[pdf](https://arxiv.org/abs/2205.03401)] 2022.5

8. **Large Language Models are Zero-Shot Reasoners.**

*Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa* [[pdf](https://arxiv.org/abs/2205.11916)], [[code](https://github.com/kojima-takeshi188/zero_shot_cot)] 2022.5

9. **Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.**

*Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi* [[pdf](https://arxiv.org/abs/2205.10625)] 2022.5

10. **Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning.**

*Antonia Creswell, Murray Shanahan, Irina Higgins* [[pdf](https://arxiv.org/abs/2205.09712)] 2022.5

11. **On the Advance of Making Language Models Better Reasoners.**

*Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen* [[pdf](https://arxiv.org/abs/2206.02336)] 2022.6

12. **Emergent Abilities of Large Language Models.**

*Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus* [[pdf](https://arxiv.org/abs/2206.07682)] 2022.6

13. **Minerva: Solving Quantitative Reasoning Problems with Language Models.**

*Ethan Dyer, Guy Gur-Ari (Google Research, Blueshift Team)* [[blog](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html)] 2022.6

14. **JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding.**

*Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou, Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen* [[pdf](https://arxiv.org/abs/2206.06315)], [[code](https://github.com/rucaibox/jiuzhang)] 2022.6

15. **A Dataset and Benchmark for Automatically Answering and Generating Machine Learning Final Exams**

*Sarah Zhang, Reece Shuttleworth, Derek Austin, Yann Hicke, Leonard Tang, Sathwik Karnik, Darnell Granberry, Iddo Drori* [[pdf](https://arxiv.org/abs/2206.05442)] 2022.6

16. **Rationale-Augmented Ensembles in Language Models.**

*Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou* [[pdf](https://arxiv.org/abs/2207.00747)] 2022.7

17. **Language Model Cascades.**

*David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton* [[pdf](https://arxiv.org/abs/2207.10342)], [[code](https://github.com/google-research/cascades)] 2022.7

18. **Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango.**

*Aman Madaan, Amir Yazdanbakhsh* [[pdf](https://arxiv.org/abs/2209.07686)] 2022.9

19. **Compositional Semantic Parsing with Large Language Models.**

*Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou* [[pdf](https://arxiv.org/abs/2209.15003)] 2022.9

20. **Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning.**

*Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan* [[pdf](https://arxiv.org/abs/2209.14610)] 2022.9

21. **Language Models are Multilingual Chain-of-Thought Reasoners.**

*Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei* [[pdf](https://arxiv.org/abs/2210.03057)], [[code](https://github.com/google-research/url-nlp)] 2022.10

22. **Automatic Chain of Thought Prompting in Large Language Models.**

*Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola* [[pdf](https://arxiv.org/abs/2210.03493)], [[code](https://github.com/amazon-science/auto-cot)] 2022.10

23. **Binding Language Models in Symbolic Languages.**

*Zhoujun Cheng\*, Tianbao Xie\*, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu* [[pdf](https://arxiv.org/abs/2210.02875)], [[code](https://github.com/hkunlp/binder)] 2022.10

24. **ReAct: Synergizing Reasoning and Acting in Language Models.**

*Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao* [[pdf](https://arxiv.org/abs/2210.03629)], [[code](https://github.com/ysymyth/ReAct)] 2022.10

25. **Ask Me Anything: A simple strategy for prompting language models.**

*Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré* [[pdf](https://arxiv.org/abs/2210.02441)], [[code](https://github.com/HazyResearch/ama_prompting)] 2022.10

26. **Language Models of Code are Few-Shot Commonsense Learners.**

*Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig* [[pdf](https://arxiv.org/abs/2210.07128)], [[code](https://github.com/madaan/cocogen)] 2022.10

27. **Large Language Models Can Self-Improve.**

*Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han* [[pdf](https://arxiv.org/abs/2210.11610)] 2022.10

28. **Large Language Models are few(1)-shot Table Reasoners.**

*Wenhu Chen* [[pdf](https://arxiv.org/abs/2210.06710)], [[code](https://github.com/wenhuchen/tablecot)] 2022.10

29. **PAL: Program-aided Language Models.**

*Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig* [[pdf](https://arxiv.org/abs/2211.10435)] 2022.11

30. **Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks.**

*Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen* [[pdf](https://arxiv.org/abs/2211.12588)], [[code](https://github.com/wenhuchen/program-of-thoughts)] 2022.11 (see the program-execution sketch at the end of this list)

31. **Self-Prompting Large Language Models for Zero-Shot Open-Domain QA.**

*Junlong Li, Zhuosheng Zhang, Hai Zhao* [[pdf](https://arxiv.org/abs/2212.08635)] 2022.12

32. **Reasoning with Language Model Prompting: A Survey.**

*Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen* [[pdf](https://arxiv.org/abs/2212.09597)], [[code](https://github.com/zjunlp/Prompt4ReasoningPapers)] 2022.12

33. **Towards Reasoning in Large Language Models: A Survey.**

*Jie Huang, Kevin Chen-Chuan Chang* [[pdf](https://arxiv.org/abs/2212.10403)], [[code](https://github.com/jeffhj/lm-reasoning)] 2022.12

34. **Large Language Models are reasoners with Self-Verification.**

*Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao* [[pdf](https://arxiv.org/abs/2212.09561)], [[code](https://github.com/WENGSYX/Self-Verification)] 2022.12

35. **Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters.**

*Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun* [[pdf](https://arxiv.org/abs/2212.10001)], [[code](https://github.com/sunlab-osu/Understanding-CoT)] 2022.12

36. **Large Language Models Are Reasoning Teachers.**

*Namgyu Ho, Laura Schmid, Se-Young Yun* [[pdf](https://arxiv.org/abs/2212.10071)], [[code](https://github.com/itsnamgyu/reasoning-teacher)] 2022.12

37. **Faithful Chain-of-Thought Reasoning**

*Qing Lyu\*, Shreya Havaldar\*, Adam Stein\*, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch* [[pdf](https://arxiv.org/abs/2301.13379)], [[code](https://github.com/veronica320/Faithful-COT)] 2023.01

38. **Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning**

*Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li* [[pdf](https://arxiv.org/abs/2301.13808)], [[code](https://github.com/itsnamgyu/reasoning-teacher)] 2023.02

39. **Multimodal Chain-of-Thought Reasoning in Language Models**

*Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola* [[pdf](https://arxiv.org/abs/2302.00923)], [[code](https://github.com/amazon-science/mm-cot)] 2023.02

40. **Large Language Models Can Be Easily Distracted by Irrelevant Context**

*Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou* [[pdf](https://arxiv.org/abs/2302.00093)], [[code](https://github.com/google-research-datasets/gsm-ic)] 2023.02

41. **Active Prompting with Chain-of-Thought for Large Language Models**

*Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang* [[pdf](https://arxiv.org/abs/2302.12246)], [[code](https://github.com/shizhediao/active-prompt)] 2023.02

42. **MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action**

*Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang* [[pdf](https://arxiv.org/abs/2303.11381)], [[code](https://github.com/microsoft/MM-REACT)] 2023.03

43. **Exploring Human-Like Translation Strategy with Large Language Models**

*Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, Xing Wang* [[pdf](https://arxiv.org/abs/2305.04118)], [[code](https://github.com/zwhe99/MAPS-mt)] 2023.05

44. **Reasoning Implicit Sentiment with Chain-of-Thought Prompting**

*Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua* [[pdf](https://arxiv.org/abs/2305.11255)], [[code](https://github.com/scofield7419/THOR-ISA)] 2023.05

45. **Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method**

*Yiming Wang, Zhuosheng Zhang, Rui Wang* [[pdf](https://arxiv.org/abs/2305.13412)], [[code](https://github.com/Alsace08/SumCoT)] 2023.05

46. **Chain Of Thought Prompting Under Streaming Batch: A Case Study**

*Yuxin Tang* [[pdf](https://arxiv.org/abs/2306.00550)] 2023.06

47. **Tab-CoT: Zero-shot Tabular Chain of Thought**

*Ziqi Jin, Wei Lu* [[pdf](https://arxiv.org/abs/2305.17812)], [[code](https://github.com/Xalp/Tab-CoT)] 2023.06

48. **Reasoning with Language Model is Planning with World Model**

*Shibo Hao\*, Yi Gu\*, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu* [[pdf](https://arxiv.org/abs/2305.14992)], [[code](https://github.com/Ber666/RAP)] 2023.05

49. **Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with Language Models**

*Soochan Lee, Gunhee Kim* [[pdf](https://arxiv.org/abs/2306.06891)], [[code](https://github.com/soochan-lee/RoT)], [[poster](https://soochanlee.com/img/rot/rot_poster.pdf)] 2023.06

50. **The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning**

*Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo* [[pdf](https://arxiv.org/abs/2305.14045)] 2023.05

51. **Cumulative Reasoning with Large Language Models**

*Yifan Zhang\*, Jingqin Yang\*, Yang Yuan, Andrew Chi-Chih Yao* [[pdf](https://arxiv.org/abs/2308.04371)], [[code](https://github.com/iiis-ai/cumulative-reasoning)] 2023.08
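
## Minimal sketches

As a companion to paper 2 above (Self-Consistency), here is a hedged sketch of the decoding strategy it proposes: sample several chain-of-thought completions at non-zero temperature, extract the final answer from each, and majority-vote. `sample_completion` stands in for any stochastic LLM call, and the answer extraction is intentionally naive.

```python
import re
from collections import Counter
from typing import Callable

def self_consistent_answer(
    prompt: str,
    sample_completion: Callable[[str], str],  # placeholder: one stochastic LLM call
    n_samples: int = 10,
) -> str:
    """Sample several chain-of-thought completions and majority-vote the final answer."""
    answers = []
    for _ in range(n_samples):
        completion = sample_completion(prompt)
        numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)  # naive: last number = answer
        if numbers:
            answers.append(numbers[-1])
    # Marginalize over reasoning paths by taking the most frequent answer.
    return Counter(answers).most_common(1)[0][0] if answers else ""
```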
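
Papers 29 (PAL) and 30 (Program of Thoughts) replace the free-text reasoning chain with generated code that an interpreter executes. The sketch below shows only that execution step; `generate_program` is again a placeholder for an LLM call, and model-written code should only ever be run in a sandbox.

```python
from typing import Any, Callable

def program_of_thought(question: str, generate_program: Callable[[str], str]) -> Any:
    """Ask the model for Python that computes the answer, then run it.

    `generate_program` is a placeholder for an LLM call that returns Python source
    which stores its result in a variable named `answer`.
    """
    prompt = (
        "# Write Python code that computes the answer to the question below\n"
        "# and stores it in a variable named `answer`.\n"
        f"# Question: {question}\n"
    )
    program = generate_program(prompt)
    namespace: dict = {}
    exec(program, namespace)  # caution: execute model-generated code only in a sandbox
    return namespace.get("answer")
```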