https://github.com/Azure-Samples/vanilla-aiagents
Lightweight library demonstrating how to create agentic applications without using any specific framework.
- Host: GitHub
- URL: https://github.com/Azure-Samples/vanilla-aiagents
- Owner: Azure-Samples
- License: MIT
- Created: 2024-12-19T16:22:55.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2025-01-27T10:36:43.000Z (8 months ago)
- Last Synced: 2025-01-27T11:45:42.788Z (8 months ago)
- Language: Python
- Homepage:
- Size: 981 KB
- Stars: 18
- Watchers: 10
- Forks: 8
- Open Issues: 1
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE.md
- Code of conduct: .github/CODE_OF_CONDUCT.md
Awesome Lists containing this project
- awesome-azure-openai-copilot - Vanilla AI Agents - Minimal agent patterns without heavy framework abstractions. (Agent Frameworks)
README
# Vanilla AI Agents
Lightweight library demonstrating how to create agentic applications without using any specific framework.
## Table of Contents
- [Features](#features)
- [Future work](#future-work)
- [Getting Started](#getting-started)
- [Demos](#demos)
- [Testing](#testing)
- [License](#license)
- [Contributing](#contributing)

## Features
This framework provides the following features:
- Multi-agent chat with several orchestration options
- Dynamic routing (including the option to inspect available tools when deciding)
- Upfront planning, with optional repetition via a feedback loop
- Agent state management
- Custom stop conditions
- Interactive or unattended user input
- Chat resumability
- Function calling on agents
- Constrained agent routing
- Sub-workflows
- Simple RAG via function calls
- Image input support
- Ability to run pre- and post-steps via `Sequence`
- Conversation context "hidden" variables, which are not shown to the user but which agents can read and write to share additional information
- Usage metrics tracking per conversation, plus an internal log for debuggability
- Multiple strategies for agents to filter conversation messages (all, last N, top K and last N, summarize, etc.)
- LLMLingua support (`extras` module) to compress system prompts via configurable strategies
- LLM support for Structured Output
- Dapr integration
  - Native submodule to host `Askable` and `Workflow` as Dapr Actors
  - Dapr _PubSub_ integration for `Workflow`, enabling
    - Event sourcing
    - Decoupled communication between workflows
- Multi-agent chat with multiple users
  - Dapr PubSub integration allows moving from one-to-many to many-to-many conversations, with different `User` instances impersonating different user profiles
  - Demo repo (_coming soon_)
- Remoting support (`remote` module), allowing agents to run on a remote server and be accessed from elsewhere
  - REST and gRPC channels supported
  - Default implementation to run hosts with agent discovery and registration
- Generated code execution, locally or via ACA Dynamic Sessions
- Streaming support, even over REST or gRPC agents

## Future work
- Plugins
  - Azure AI Search plugin
  - DB plugin
  - API plugin

## Getting Started
### Prerequisites
Python 3.11 or later is required to run this project.
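As a quick sanity check before installing, you can confirm the interpreter meets this requirement (a generic Python snippet, not part of the library):

```python
import sys

# vanilla-aiagents targets Python 3.11+; fail fast on older interpreters.
assert sys.version_info >= (3, 11), f"Python 3.11+ required, found {sys.version.split()[0]}"
```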
### Quickstart
```powershell
git clone https://github.com/Azure-Samples/vanilla-aiagents
cd "vanilla-aiagents"
# Create a virtual environment
python -m venv .venv

# Activate the virtual environment
# On Windows
.\.venv\Scripts\activate
# On Unix or MacOS
source .venv/bin/activate

# Install the required dependencies
pip install -r requirements.txt

# Copy .env.sample to .env and update the values
cp .env.sample .env
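
# NOTE (assumption, not taken from .env.sample itself): the Python example below
# reads these settings via os.getenv(), so .env should define at least the
# following. Placeholder values shown; the actual .env.sample may list more.
#   AZURE_OPENAI_MODEL=<your Azure OpenAI deployment name>
#   AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
#   AZURE_OPENAI_KEY=<your API key>
#   AZURE_OPENAI_API_VERSION=<your API version>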
```

Here is a simple example of how to use the framework:
```python
import os
from vanilla_aiagents.llm import AzureOpenAILLM
from vanilla_aiagents.agent import Agent
from vanilla_aiagents.team import Team
from vanilla_aiagents.workflow import Workflow

llm = AzureOpenAILLM({
"azure_deployment": os.getenv("AZURE_OPENAI_MODEL"),
"azure_endpoint": os.getenv("AZURE_OPENAI_ENDPOINT"),
"api_key": os.getenv("AZURE_OPENAI_KEY"),
"api_version": os.getenv("AZURE_OPENAI_API_VERSION"),
})

# Initialize agents and team
sales = Agent(id="sales", llm=llm, description="A sales agent", system_message="""
You are a sales assistant. You provide information about our products and services.

# PRODUCTS
- Product 1: $100, description
- Product 2: $200, description
- Product 3: $300, description
""")
support = Agent(id="support", llm=llm, description="A support agent", system_message="""
You are a support assistant. You provide help with technical issues and account management.

# SUPPORT GUIDELINES
- For technical issues, please provide the following information: ...
- For account management, please provide the following information: ...
""")
team = Team(id="team", description="Contoso team", members=[sales, support], llm=llm)

# Create a workflow
workflow = Workflow(askable=team)

# Run the workflow
result = workflow.run("Hello, I'd like to know more about your products.")
print(workflow.conversation.messages)
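
# `workflow.conversation.messages` holds the full exchange routed through the team.
# Assumption (not shown in the original README): given the "Chat resumability"
# feature listed above, calling run() again on the same workflow should continue
# the conversation, e.g.:
# result = workflow.run("Which of those products would you recommend for a small team?")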
```

### Submodules
#### `remote`
This module provides a way to run agents on a remote server and access them elsewhere. It includes support for REST and gRPC channels, as well as a default implementation to run hosts with agent discovery and registration.
Additionally, it features Dapr integration, allowing for the hosting of `Askable` and `Workflow` as Dapr Actors, enabling event sourcing and decoupled communication between workflows.
See the [Remote Agents documentation](docs/remote.md) and [Actors documentation](docs/actors.md) for more information.
#### `extras`
This module provides additional features to enhance the functionality of the framework. It includes support for:
- `LLMLingua` to compress system prompts
## Demos
The `notebooks` folder contains a few demo notebooks that show how to use the framework in various scenarios.
## Testing
To run the tests, execute the following command:
```bash
invoke test
```

To run selected tests, execute the following command:
```bash
invoke test --test-case
```

The test run also reports code coverage.
## Building
To build the project, run the following command:
```bash
invoke build --version
```

The output wheel will be available in the `dist` folder under `vanilla_aiagents`, with the naming `vanilla_aiagents--py3-none-any.whl`.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute.