Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/stanfordnlp/dspy
DSPy: The framework for programming—not prompting—language models
Last synced: 4 days ago
- Host: GitHub
- URL: https://github.com/stanfordnlp/dspy
- Owner: stanfordnlp
- License: mit
- Created: 2023-01-09T21:01:51.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-11-30T04:08:05.000Z (13 days ago)
- Last Synced: 2024-12-01T10:05:09.926Z (12 days ago)
- Language: Python
- Homepage: https://dspy.ai
- Size: 36.1 MB
- Stars: 19,379
- Watchers: 145
- Forks: 1,471
- Open Issues: 196
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-adaptive-computation - code
- awesome-repositories - stanfordnlp/dspy - DSPy: The framework for programming—not prompting—foundation models (Python)
- AiTreasureBox - stanfordnlp/dspy - Stanford DSPy: The framework for programming—not prompting—foundation models (Repos)
- awesome-generative-ai - 🔥🔥🔥
- awesome-llm-json - DSPy - blocks/8-typed_predictors.md) to leverage [Pydantic](https://github.com/pydantic/pydantic) for enforcing type constraints on inputs and outputs, improving upon string-based fields. (Python Libraries)
- StarryDivineSky - stanfordnlp/dspy - Makes large models (e.g., GPT-3.5 or GPT-4) and local models (e.g., T5-base or Llama2-13b) more reliable on tasks, i.e., higher quality and/or fewer specific failure modes. DSPy optimizers "compile" the same program into different instructions, few-shot prompts, and/or weight updates (fine-tuning) for each LM. This is a new paradigm in which LMs and their prompts fade into the background as optimizable parts of a larger system that can learn from data. TL;DR: less prompting, higher scores, and a more systematic approach to hard LM tasks. (A01_Text Generation_Text Dialogue / Large language dialogue models and data)
- Awesome-LLM - dspy - DSPy: The framework for programming—not prompting—foundation models. (LLM Applications)
- awesome-deliberative-prompting - code
- awesome-langchain - DSPy
- awesome-production-machine-learning - dspy - A framework for programming with foundation models. (Industry Strength NLP)
- awesome-agents - dspy
- Awesome-LLM-RAG-Application - DSPy
- awesome-agents - DSPy
- awesome - stanfordnlp/dspy - DSPy: The framework for programming—not prompting—language models (Python)
README
## DSPy: _Programming_—not prompting—Foundation Models
**Documentation:** [DSPy Docs](https://dspy.ai/)
[![Downloads](https://static.pepy.tech/badge/dspy-ai)](https://pepy.tech/project/dspy-ai) [![Downloads](https://static.pepy.tech/badge/dspy/month)](https://pepy.tech/project/dspy)
----
DSPy is the framework for _programming—rather than prompting—language models_. It allows you to iterate fast on **building modular AI systems** and offers algorithms for **optimizing their prompts and weights**, whether you're building simple classifiers, sophisticated RAG pipelines, or Agent loops.
DSPy stands for Declarative Self-improving Python. Instead of brittle prompts, you write compositional _Python code_ and use DSPy to **teach your LM to deliver high-quality outputs**. Learn more via our [official documentation site](https://dspy.ai/) or meet the community, seek help, or start contributing via this GitHub repo and our [Discord server](https://discord.gg/XCGy2WDCQB).
## Installation
```bash
pip install dspy
```

To install the very latest from `main`:
```bash
pip install git+https://github.com/stanfordnlp/dspy.git
```

## 📜 Citation & Reading More
**[Jun'24] [Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs](https://arxiv.org/abs/2406.11695)**
**[Oct'23] [DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines](https://arxiv.org/abs/2310.03714)**
[Jul'24] [Fine-Tuning and Prompt Optimization: Two Great Steps that Work Better Together](https://arxiv.org/abs/2407.10930)
[Jun'24] [Prompts as Auto-Optimized Training Hyperparameters](https://arxiv.org/abs/2406.11706)
[Feb'24] [Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models](https://arxiv.org/abs/2402.14207)
[Jan'24] [In-Context Learning for Extreme Multi-Label Classification](https://arxiv.org/abs/2401.12178)
[Dec'23] [DSPy Assertions: Computational Constraints for Self-Refining Language Model Pipelines](https://arxiv.org/abs/2312.13382)
[Dec'22] [Demonstrate-Search-Predict: Composing Retrieval & Language Models for Knowledge-Intensive NLP](https://arxiv.org/abs/2212.14024)

To stay up to date or learn more, follow [@lateinteraction](https://twitter.com/lateinteraction) on Twitter.
The **DSPy** logo is designed by **Chuyi Zhang**.
If you use DSPy or DSP in a research paper, please cite our work as follows:
```
@inproceedings{khattab2024dspy,
title={DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines},
author={Khattab, Omar and Singhvi, Arnav and Maheshwari, Paridhi and Zhang, Zhiyuan and Santhanam, Keshav and Vardhamanan, Sri and Haq, Saiful and Sharma, Ashutosh and Joshi, Thomas T. and Moazam, Hanna and Miller, Heather and Zaharia, Matei and Potts, Christopher},
    booktitle={The Twelfth International Conference on Learning Representations},
year={2024}
}
@article{khattab2022demonstrate,
title={Demonstrate-Search-Predict: Composing Retrieval and Language Models for Knowledge-Intensive {NLP}},
author={Khattab, Omar and Santhanam, Keshav and Li, Xiang Lisa and Hall, David and Liang, Percy and Potts, Christopher and Zaharia, Matei},
journal={arXiv preprint arXiv:2212.14024},
year={2022}
}
```