LM-reasoning
This repository contains a collection of papers and resources on Reasoning in Large Language Models.
https://github.com/jeffhj/LM-reasoning
Technique
Fully Supervised Finetuning
- Towards Reasoning in Large Language Models: A Survey (our survey)
- Explain Yourself! Leveraging Language Models for Commonsense Reasoning
- Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge
- Measuring Mathematical Problem Solving With the MATH Dataset
- Show Your Work: Scratchpads for Intermediate Computation with Language Models
- FaiRR: Faithful and Robust Deductive Reasoning over Natural Language
Prompting and In-Context Learning
- Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models
- Language Models are Multilingual Chain-of-Thought Reasoners
- Large Language Models are few(1)-shot Table Reasoners
- Language Models of Code are Few-Shot Commonsense Learners
- PAL: Program-aided Language Models
- Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
- Rethinking with Retrieval: Faithful Large Language Model Inference
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- Iteratively Prompt Pre-trained Language Models for Chain of Thought
- Training Verifiers to Solve Math Word Problems
- Large Language Models are Zero-Shot Reasoners
- Self-Consistency Improves Chain of Thought Reasoning in Language Models
- On the Advance of Making Language Models Better Reasoners
- Complexity-Based Prompting for Multi-Step Reasoning
- Automatic Chain of Thought Prompting in Large Language Models
- Teaching Algorithmic Reasoning via In-context Learning
- Large Language Models are Better Reasoners with Self-Verification
- Least-to-Most Prompting Enables Complex Reasoning in Large Language Models
- Compositional Semantic Parsing with Large Language Models
- Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning
- Decomposed Prompting: A Modular Approach for Solving Complex Tasks
- Measuring and Narrowing the Compositionality Gap in Language Models
- Successive Prompting for Decomposing Complex Questions
- Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning
- Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
- Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
- Faithful Reasoning Using Large Language Models
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
- Explanations from Large Language Models Make Small Reasoners Better
- Distilling Multi-Step Reasoning Capabilities of Large Language Models into Smaller Models via Semantic Decompositions
- Teaching Small Language Models to Reason
- LAMBADA: Backward Chaining for Automated Reasoning in Natural Language
- Reasoning with Language Model is Planning with World Model
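Several entries above (chain-of-thought prompting, self-consistency) share a common recipe: prompt the model with worked examples that spell out intermediate reasoning, sample multiple completions, and majority-vote over the extracted final answers. A minimal sketch of that recipe, with hardcoded strings standing in for sampled model completions (no real model is called; the prompt, questions, and answer format are all illustrative assumptions):

```python
from collections import Counter
import re

# Few-shot chain-of-thought prompt: a worked example with explicit
# reasoning steps, followed by the target question (toy example).
COT_PROMPT = """Q: Roger has 5 balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

Q: A farm has 3 pens with 4 pigs each. How many pigs are there in total?
A:"""

def extract_answer(chain: str):
    """Pull the final numeric answer from a sampled reasoning chain."""
    m = re.search(r"The answer is (-?\d+)", chain)
    return m.group(1) if m else None

def self_consistency(chains):
    """Majority-vote over the final answers of several sampled chains."""
    answers = [a for a in (extract_answer(c) for c in chains) if a is not None]
    return Counter(answers).most_common(1)[0][0] if answers else None

# Stand-ins for temperature-sampled completions of COT_PROMPT.
sampled = [
    "There are 3 pens with 4 pigs each. 3 * 4 = 12. The answer is 12.",
    "3 pens times 4 pigs gives 12 pigs. The answer is 12.",
    "4 + 4 + 4 = 12, plus the farmer's own pig makes 13. The answer is 13.",
]
print(self_consistency(sampled))  # majority answer: 12
```

The vote discards chains whose reasoning diverges (the third sample here), which is the core intuition behind Self-Consistency; swapping in real sampled completions from any language model leaves the aggregation logic unchanged.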
Hybrid Method
- Reasoning Like Program Executors
- Solving Quantitative Reasoning Problems with Language Models
- Scaling Instruction-Finetuned Language Models
- Galactica: A Large Language Model for Science
- ALERT: Adapting Language Models to Reasoning Tasks
- STaR: Bootstrapping Reasoning With Reasoning
- Language Models Can Teach Themselves to Program Better
- Large Language Models Can Self-Improve
Relevant Surveys, Position Papers, and Blogs
- Emergent Abilities of Large Language Models
- Language Model Cascades
- How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources
- Reasoning with Language Model Prompting: A Survey
- A Survey of Deep Learning for Mathematical Reasoning
- A Survey for In-context Learning
- Logical Reasoning over Natural Language as Knowledge Representation: A Survey
- Natural Language Reasoning, A Survey
Evaluation and Analysis
- Are NLP Models really able to Solve Simple Math Word Problems?
- Impact of Pretraining Term Frequencies on Few-Shot Reasoning
- Are Large Pre-Trained Language Models Leaking Your Personal Information?
- Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change)
- Language models show human-like content effects on reasoning
- FOLIO: Natural Language Reasoning with First-Order Logic
- Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
- Large language models are not zero-shot communicators
- ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning
- Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters
- Exploring Length Generalization in Large Language Models