https://github.com/vicentereig/dspy.rb
The Ruby framework for programming—rather than prompting—language models.
- Host: GitHub
- URL: https://github.com/vicentereig/dspy.rb
- Owner: vicentereig
- Created: 2025-03-15T14:22:18.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2025-04-12T23:31:16.000Z (about 1 month ago)
- Last Synced: 2025-04-13T00:23:50.425Z (about 1 month ago)
- Topics: ai, dspy, llm, rails, ruby
- Language: Ruby
- Size: 121 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 5
Metadata Files:
- Readme: README.md
README
# DSPy.rb
A Ruby port of the [DSPy library](https://dspy.ai/), enabling a composable and pipeline-oriented approach to programming with Large Language Models (LLMs) in Ruby.
## Current State
DSPy.rb provides a foundation for composable LLM programming with the following implemented features:
- **Signatures**: Define input/output schemas for LLM interactions using JSON schemas
- **Predict**: Basic LLM completion with structured inputs and outputs
- **Chain of Thought**: Enhanced reasoning through step-by-step thinking
- **RAG (Retrieval-Augmented Generation)**: Enriched responses with context from retrieval
- **Multi-stage Pipelines**: Compose multiple LLM calls in a structured workflow

The library currently supports:
- OpenAI and Anthropic via [Ruby LLM](https://github.com/crmne/ruby_llm)
- JSON schema validation with [dry-schema](https://dry-rb.org/gems/dry-schema/)

## Installation
This gem hasn't shipped its first release yet. In the meantime, I recommend installing it straight from this repo:

```ruby
gem 'dspy', github: 'vicentereig/dspy.rb'
```

## Usage Examples
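All of the examples below configure an OpenAI model. Since Anthropic is also supported through Ruby LLM, the equivalent configuration should look roughly like the sketch below; the exact Anthropic model identifier is an assumption, not taken from this README.

```ruby
# Sketch of configuring an Anthropic model instead of OpenAI.
# Assumption: the 'provider/model' identifier follows the same pattern as the
# OpenAI examples below; the exact model string may differ.
DSPy.configure do |c|
  c.lm = DSPy::LM.new('anthropic/claude-3-5-sonnet-20241022', api_key: ENV['ANTHROPIC_API_KEY'])
end
```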
### Basic Prediction
```ruby
# Define a signature for sentiment classification
class Classify < DSPy::Signature
description "Classify sentiment of a given sentence."input do
required(:sentence).value(:string).meta(description: 'The sentence to analyze')
endoutput do
required(:sentiment).value(included_in?: %w(positive negative neutral))
.meta(description: 'The sentiment classification')
required(:confidence).value(:float).meta(description: 'Confidence score')
end
end# Initialize the language model
class SentimentClassifierWithDescriptions < DSPy::Signature
description "Classify sentiment of a given sentence."input do
required(:sentence)
.value(:string)
.meta(description: 'The sentence whose sentiment you are analyzing')
endoutput do
required(:sentiment)
.value(included_in?: [:positive, :negative, :neutral])
.meta(description: 'The allowed values to classify sentences')required(:confidence).value(:float)
.meta(description:'The confidence score for the classification')
end
end
DSPy.configure do |c|
c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
end
# Create the predictor and run inference
classify = DSPy::Predict.new(Classify)
result = classify.call(sentence: "This book was super fun to read, though not the last chapter.")
# => {:confidence=>0.85, :sentence=>"This book was super fun to read, though not the last chapter.", :sentiment=>"positive"}
```### Chain of Thought Reasoning
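`ChainOfThought` wraps a signature the same way `Predict` does, but the result also carries the model's step-by-step reasoning. As a sketch, the `Classify` signature from the previous example could be reused like this; the exact response shape is an assumption based on the example that follows.

```ruby
# Sketch: reuse the Classify signature from the Basic Prediction example,
# wrapped in ChainOfThought so the result should also carry a :reasoning field.
classify_cot = DSPy::ChainOfThought.new(Classify)
result = classify_cot.call(sentence: "This book was super fun to read, though not the last chapter.")
# expected to include :sentiment, :confidence, and :reasoning
```

A fuller question-answering example: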
```ruby
class AnswerPredictor < DSPy::Signature
description "Provides a concise answer to the question"input do
required(:question).value(:string)
end
output do
required(:answer).value(:string)
end
endDSPy.configure do |c|
c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
endqa_cot = DSPy::ChainOfThought.new(AnswerPredictor)
response = qa_cot.call(question: "Two dice are tossed. What is the probability that the sum equals two?")
# Result includes reasoning and answer in the response
# {:question=>"...", :answer=>"1/36", :reasoning=>"There is only one way to get a sum of 2..."}
```### RAG (Retrieval-Augmented Generation)
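The example below uses a `ColBERTv2` retriever client, which is not part of this gem; any object whose `call` returns results responding to `long_text` will do. A hypothetical stand-in for local experimentation:

```ruby
# Hypothetical stand-in retriever: the RAG example below only needs #call to
# return objects that respond to #long_text.
class StaticRetriever
  Passage = Struct.new(:long_text)

  def initialize(passages)
    @passages = passages.map { |text| Passage.new(text) }
  end

  def call(_query)
    @passages
  end
end

retriever = StaticRetriever.new(["Some passage of reference text.", "Another passage."])
```

The RAG pipeline itself: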
```ruby
class ContextualQA < DSPy::Signature
description "Answers questions using relevant context"
input do
required(:context).value(Types::Array.of(:string))
required(:question).filled(:string)
endoutput do
required(:response).filled(:string)
end
endDSPy.configure do |c|
c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
end# Set up retriever (example using ColBERT)
retriever = ColBERTv2.new(url: 'http://your-retriever-endpoint')
# Generate a contextual response
rag = DSPy::ChainOfThought.new(ContextualQA)
prediction = rag.call(question: question, context: retriever.call('your query').map(&:long_text))
```### Multi-stage Pipeline
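The pipeline below composes two `ChainOfThought` modules. It references `Outline`, `DraftSection`, and `DraftArticle`, which are not defined in this README; a minimal sketch of what they might look like, inferred from how `forward` uses them (field names and types are assumptions):

```ruby
# Hypothetical signatures assumed by the ArticleDrafter pipeline below.
class Outline < DSPy::Signature
  description "Outline a comprehensive overview of a topic."

  input do
    required(:topic).value(:string)
  end

  output do
    required(:title).value(:string)
    required(:section_subheadings).value(:hash)
      .meta(description: 'Mapping of section headings to subheadings')
  end
end

class DraftSection < DSPy::Signature
  description "Draft a section of an article."

  input do
    required(:topic).value(:string)
    required(:section_heading).value(:string)
    required(:section_subheadings).value(Types::Array.of(:string))
  end

  output do
    required(:content).value(:string)
  end
end

# Hypothetical value object for the assembled article.
DraftArticle = Struct.new(:title, :sections, keyword_init: true)
```

And the pipeline: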
```ruby
# Create a pipeline for article drafting
class ArticleDrafter < DSPy::Module
  def initialize
    @build_outline = DSPy::ChainOfThought.new(Outline)
    @draft_section = DSPy::ChainOfThought.new(DraftSection)
  end

  def forward(topic)
    # First build the outline
    outline = @build_outline.call(topic: topic)

    # Then draft each section
    sections = []
    (outline[:section_subheadings] || {}).each do |heading, subheadings|
      section = @draft_section.call(
        topic: outline[:title],
        section_heading: "## #{heading}",
        section_subheadings: [subheadings].flatten.map { |sh| "### #{sh}" }
      )
      sections << section
    end

    DraftArticle.new(title: outline[:title], sections: sections)
  end
end

DSPy.configure do |c|
  c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
end

# Usage
drafter = ArticleDrafter.new
article = drafter.call("World Cup 2002")
```

## Roadmap
### First Release
- [x] Signatures and Predict module
- [x] RAG examples
- [x] Multi-Stage Pipelines
- [x] Validate inputs and outputs with JSON Schema
- [x] thread-safe global config
- [x] Convert responses from hashes to dry-rb POROs (hashes currently come with tons of footguns :fire:)
- [ ] Cover unhappy paths: validation errors
- [ ] Implement ReAct module for reasoning and acting
- [ ] Add OpenTelemetry instrumentation
- [ ] Improve logging
- [ ] Add streaming support (?)
- [x] Ensure thread safety
- [ ] Comprehensive initial documentation

#### Backburner
- [ ] Support for multiple LM providers (Anthropic, etc.)
- [ ] Support for reasoning providers
- [ ] Adaptive Graph of Thoughts with Tools

### Optimizers
- [ ] Optimizing prompts: RAG
- [ ] Optimizing prompts: Chain of Thought
- [ ] Optimizing prompts: ReAct
- [ ] Optimizing weights: Classification

## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE.txt file for details.