https://github.com/expectedparrot/edsl
Design, conduct and analyze results of AI-powered surveys and experiments. Simulate social science and market research with large numbers of AI agents and LLMs.
- Host: GitHub
- URL: https://github.com/expectedparrot/edsl
- Owner: expectedparrot
- License: MIT
- Created: 2024-01-02T22:18:47.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2025-05-07T11:01:07.000Z (5 months ago)
- Last Synced: 2025-05-07T11:23:54.918Z (5 months ago)
- Topics: anthropic, data-labeling, deepinfra, domain-specific-language, experiments, llama2, llm, llm-agent, llm-framework, llm-inference, market-research, mixtral, open-source, openai, python, social-science, surveys, synthetic-data
- Language: Python
- Homepage: https://docs.expectedparrot.com
- Size: 58.7 MB
- Stars: 238
- Watchers: 6
- Forks: 24
- Open Issues: 153
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: docs/contributing.rst
- License: LICENSE
# Expected Parrot Domain-Specific Language (EDSL)
The Expected Parrot Domain-Specific Language (EDSL) package makes it easy to conduct computational social science and market research with AI. Use it to design surveys and experiments, collect responses from humans and large language models, and perform data labeling and many other research tasks. Results are formatted as specified datasets and come with built-in methods for analyzing, visualizing and sharing.
## Features
**Declarative design**:
Specified question types ensure consistent results without requiring a JSON schema (view at Coop):

```python
from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
    question_name = "example",
    question_text = "How do you feel today?",
    question_options = ["Bad", "OK", "Good"]
)
results = q.run()
results.select("example")
```

> | answer.example |
> |-----------------|
> | Good |
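The constraint that a typed question enforces can be illustrated with a plain-Python sketch. This is not EDSL's internal code, and `validate_multiple_choice` is a hypothetical helper; it only shows the idea that a multiple-choice answer is accepted iff it matches one of the declared options.

```python
# Plain-Python sketch (not EDSL internals) of what a typed question enforces:
# a multiple-choice answer must be one of the declared options.
options = ["Bad", "OK", "Good"]

def validate_multiple_choice(answer, options):
    """Return the answer unchanged if it is a declared option, else raise."""
    if answer not in options:
        raise ValueError(f"{answer!r} is not one of {options}")
    return answer

print(validate_multiple_choice("Good", options))  # → Good
```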
**Parameterized prompts**:
Easily parameterize and control prompts with "scenarios" of data automatically imported from many sources (CSV, PDF, PNG, etc.) (view at Coop):

```python
from edsl import ScenarioList, QuestionLinearScale

q = QuestionLinearScale(
    question_name = "example",
    question_text = "How much do you enjoy {{ activity }}?",
    question_options = [1, 2, 3, 4, 5],
    option_labels = {1: "Not at all", 5: "Very much"}
)
sl = ScenarioList.from_list("activity", ["coding", "sleeping"])
results = q.by(sl).run()
results.select("activity", "example")
```

> | scenario.activity | answer.example |
> |--------------------|-----------------|
> | Coding | 5 |
> | Sleeping | 5 |
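Conceptually, scenario substitution fills the `{{ activity }}` placeholder once per row of data. A plain-Python sketch of that idea (not EDSL's actual implementation; the in-memory string stands in for a real CSV file):

```python
# Plain-Python sketch of scenario substitution (not EDSL's implementation):
# each row of a CSV fills the {{ activity }} placeholder in the prompt.
import csv
import io

template = "How much do you enjoy {{ activity }}?"
csv_data = io.StringIO("activity\ncoding\nsleeping\n")  # stands in for a CSV file

prompts = [
    template.replace("{{ activity }}", row["activity"])
    for row in csv.DictReader(csv_data)
]
print(prompts)
# → ['How much do you enjoy coding?', 'How much do you enjoy sleeping?']
```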
**Design AI agent personas to answer questions**:
Construct agents with relevant traits to provide diverse responses to your surveys (view at Coop):

```python
from edsl import Agent, AgentList, QuestionList

al = AgentList(Agent(traits = {"persona": p}) for p in ["botanist", "detective"])
q = QuestionList(
    question_name = "example",
    question_text = "What are your favorite colors?",
    max_list_items = 3
)
results = q.by(al).run()
results.select("persona", "example")
```

> | agent.persona | answer.example |
> |----------------|---------------------------------------------|
> | botanist | ['Green', 'Earthy Brown', 'Sunset Orange'] |
> | detective      | ['Gray', 'Black', 'Navy Blue']              |
**Simplified access to LLMs**:
Choose whether to use your own keys for LLMs or access all available models with an Expected Parrot API key. Run surveys with many models at once and compare responses in a convenient interface (view at Coop):

```python
from edsl import Model, ModelList, QuestionFreeText

ml = ModelList(Model(m) for m in ["gpt-4o", "gemini-1.5-flash"])
q = QuestionFreeText(
    question_name = "example",
    question_text = "What is your top tip for using LLMs to answer surveys?"
)
results = q.by(ml).run()
results.select("model", "example")
```

> | model.model | answer.example |
> |--------------------|-------------------------------------------------------------------------------------------------|
> | gpt-4o | When using large language models (LLMs) to answer surveys, my top tip is to ensure that the ... |
> | gemini-1.5-flash | My top tip for using LLMs to answer surveys is to **treat the LLM as a sophisticated brainst... |
**Piping & skip-logic**:
Build rich data labeling flows with features for piping answers and adding survey logic such as skip and stop rules (view at Coop):

```python
from edsl import QuestionMultipleChoice, QuestionFreeText, Survey

q1 = QuestionMultipleChoice(
    question_name = "color",
    question_text = "What is your favorite primary color?",
    question_options = ["red", "yellow", "blue"]
)
q2 = QuestionFreeText(
    question_name = "flower",
    question_text = "Name a flower that is {{ color.answer }}."
)
survey = Survey(questions = [q1, q2])
results = survey.run()
results.select("color", "flower")
```

> | answer.color | answer.flower |
> |---------------|-----------------------------------------------------------------------------------|
> | blue | A commonly known blue flower is the bluebell. Another example is the cornflower. |
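Conceptually, piping substitutes an earlier answer into a later question's text, and a skip rule is a condition evaluated against the answers collected so far. A plain-Python sketch of both ideas (not EDSL's API; the dict and condition are illustrative):

```python
# Plain-Python sketch of piping and skip logic (not EDSL's API).
answers = {"color": "blue"}  # pretend the model answered q1 with "blue"

# Piping: substitute a prior answer into the next question's text
template = "Name a flower that is {{ color.answer }}."
q2_text = template.replace("{{ color.answer }}", answers["color"])
print(q2_text)  # → Name a flower that is blue.

# Skip rule: skip q2 whenever a condition on earlier answers holds
skip_q2 = answers["color"] == "red"
print(skip_q2)  # → False
```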
**Caching**:
API calls to LLMs are cached automatically, allowing you to retrieve responses to questions that have already been run and reproduce experiments at no cost. Learn more about how the universal remote cache works.

**Logging**:
EDSL includes a comprehensive logging system to help with debugging and monitoring. Control log levels and see important information about operations:

```python
from edsl import logger
import logging

# Set the logging level
logger.set_level(logging.DEBUG)  # Show all log messages

# Get a module-specific logger
my_logger = logger.get_logger(__name__)
my_logger.info("This is a module-specific log message")

# Log messages at different levels
logger.debug("Detailed debugging information")
logger.info("General information about operation")
logger.warning("Something unexpected but not critical")
logger.error("Something went wrong")
```

**Flexibility**:
Choose whether to run surveys on your own computer or at the Expected Parrot server.

**Tools for collaboration**:
Easily share workflows and projects privately or publicly at Coop: an integrated platform for AI-based research. Your account comes with free credits for running surveys and lets you securely share keys, track expenses, and monitor usage for your team.

**Built-in tools for analysis**:
Analyze results as specified datasets from your account or workspace. Easily import data to use with your surveys and export results.

## Getting started
1. Run `pip install edsl` to install the package.
2. Create an account to run surveys at the Expected Parrot server and access a universal remote cache of stored responses for reproducing results.
3. Choose whether to use your own keys for language models or get an Expected Parrot key to access all available models at once. Securely manage keys, share expenses and track usage for your team from your account.
4. Run the starter tutorial and explore other demo notebooks.
5. Share workflows and survey results at Coop.
6. Join our Discord for updates and discussions, and to request new features!
## Code & Docs
- PyPI
- GitHub
- Documentation

## Requirements
- Python 3.9 - 3.12
- API keys for language models. You can use your own keys or an Expected Parrot key that provides access to all available models.
See the instructions on managing keys, and the model pricing and performance information.

## Developer Notes
### Running Tests
- Unit tests: `python -m pytest tests/`
- Integration tests: `python -m pytest integration/`
- Doctests: `python run_doctests.py` (use the `-v` flag for verbose output)

## Coop
An integrated platform for running experiments, sharing workflows and launching hybrid human/AI surveys.
- Login / Signup
- Explore

## Community
- Discord
- Blog

## Contact

- Email