# Bolna

End-to-end open-source voice agents platform: quickly build voice-first conversational assistants through a JSON configuration.


Discord | Docs | Website


Bolna is released under the MIT license.


PRs welcome!

## Introduction

**[Bolna](https://bolna.dev)** is an end-to-end, open-source, production-ready framework for quickly building LLM-based, voice-driven conversational applications.

## Demo
https://github.com/bolna-ai/bolna/assets/1313096/2237f64f-1c5b-4723-b7e7-d11466e9b226

## Components
Bolna helps you create AI voice agents that can be instructed to perform tasks such as:

1. Initiating a phone call using telephony providers like `Twilio`, `Plivo`, `Exotel`, etc.
2. Transcribing the conversation using providers like `Deepgram`
3. Handling the conversation using LLMs like `OpenAI`, `Llama`, `Cohere`, `Mistral`, etc.
4. Synthesizing LLM responses back to telephony using `AWS Polly`, `XTTS`, `ElevenLabs`, `Deepgram`, etc.
5. Instructing the agent to perform follow-up tasks like sending emails, text messages, or booking calendar appointments after the conversation has ended

Refer to the [docs](https://docs.bolna.dev/providers) for a deep dive into all supported providers.

## Local example setup
A basic local setup uses [Twilio](local_setup/telephony_server/twilio_api_server.py) or [Plivo](local_setup/telephony_server/plivo_api_server.py) for telephony. We have dockerized the setup in `local_setup/`. You will need to populate a `.env` file from `.env.sample`.

The setup consists of four containers:

1. Telephony web server:
   * Twilio: to initiate calls, you will need to set up a [Twilio account](https://www.twilio.com/docs/usage/tutorials/how-to-use-your-free-trial-account)
   * Plivo: to initiate calls, you will need to set up a [Plivo account](https://www.plivo.com/)
2. Bolna server: for creating and handling agents
3. `ngrok`: for tunneling. You will need to add your `authtoken` to `ngrok-config.yml`
4. `redis`: for persisting agent & prompt data

Use Docker to build the images, using the `.env` file as the environment file, and run them locally:
1. `docker-compose build --no-cache`: rebuild the images
2. `docker-compose up`: run the built images
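
Put together, a minimal bring-up from `local_setup/` might look like the sketch below (the `cp` step assumes `.env.sample` sits alongside the compose file and simply seeds the `.env` you then fill in):

```sh
cd local_setup
cp .env.sample .env              # then fill in your provider keys
docker-compose build --no-cache  # rebuild the images
docker-compose up                # run the built images
```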

Once the Docker containers are up, you can start creating agents and instructing them to initiate calls.

## Creating your agent and invoking calls
Once you have the above Docker setup up and running, you can create agents and initiate calls.
1. Use the payload below to create an agent via `http://localhost:5001/agent`

Agent Payload

```json
{
  "agent_config": {
    "agent_name": "Alfred",
    "agent_type": "other",
    "agent_welcome_message": "Welcome",
    "tasks": [
      {
        "task_type": "conversation",
        "toolchain": {
          "execution": "parallel",
          "pipelines": [
            [
              "transcriber",
              "llm",
              "synthesizer"
            ]
          ]
        },
        "tools_config": {
          "input": {
            "format": "pcm",
            "provider": "twilio"
          },
          "llm_agent": {
            "agent_flow_type": "streaming",
            "provider": "openai",
            "request_json": true,
            "model": "gpt-3.5-turbo-16k",
            "use_fallback": true
          },
          "output": {
            "format": "pcm",
            "provider": "twilio"
          },
          "synthesizer": {
            "audio_format": "wav",
            "provider": "elevenlabs",
            "stream": true,
            "provider_config": {
              "voice": "Meera - high quality, emotive",
              "model": "eleven_turbo_v2_5",
              "voice_id": "TTa58Hl9lmhnQEvhp1WM"
            },
            "buffer_size": 100.0
          },
          "transcriber": {
            "encoding": "linear16",
            "language": "en",
            "provider": "deepgram",
            "stream": true
          }
        },
        "task_config": {
          "hangup_after_silence": 30.0
        }
      }
    ]
  },
  "agent_prompts": {
    "task_1": {
      "system_prompt": "Ask if they are coming for party tonight"
    }
  }
}
```
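
As an illustration, here is a minimal Python sketch (not part of the repository) for creating the agent with the `requests` library; it assumes the payload above has been saved locally as `agent_payload.json` and that the JSON response exposes the uuid under an `agent_id` key, as described in step 2 below:

```python
import json

import requests

# Load the agent payload shown above (assumed to be saved locally).
with open("agent_payload.json") as f:
    agent_payload = json.load(f)

# Create the agent on the locally running Bolna server.
response = requests.post("http://localhost:5001/agent", json=agent_payload)
response.raise_for_status()

# The response returns a uuid as the agent_id; keep it for initiating calls.
agent_id = response.json()["agent_id"]
print(agent_id)
```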

2. The response of the previous API returns a uuid as the `agent_id`. Use this `agent_id` to initiate a call via the telephony server running on port `8001` (for Twilio) or port `8002` (for Plivo), e.g. `http://localhost:8001/call`

Call Payload

```json
{
  "agent_id": "4c19700b-227c-4c2d-8bgf-42dfe4b240fc",
  "recipient_phone_number": "+19876543210"
}
```
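
A matching sketch for initiating the call (again an illustrative example rather than repository code); replace the `agent_id` value with the uuid returned when the agent was created, and swap the port to `8002` if you are using the Plivo server:

```python
import requests

call_payload = {
    # uuid returned by the /agent endpoint above
    "agent_id": "4c19700b-227c-4c2d-8bgf-42dfe4b240fc",
    # number to dial
    "recipient_phone_number": "+19876543210",
}

# The Twilio telephony server listens on port 8001; use 8002 for Plivo.
response = requests.post("http://localhost:8001/call", json=call_payload)
response.raise_for_status()
```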

## Using your own providers
You can populate the `.env` file to use your own keys for providers.

ASR Providers

These are the currently supported ASR providers:

| Provider | Environment variable to be added in `.env` file |
|--------------|-------------------------------------------------|
| Deepgram | `DEEPGRAM_AUTH_TOKEN` |

 

LLM Providers

Bolna uses the LiteLLM package to support multiple LLM integrations.

These are the currently supported LLM provider families:
https://github.com/bolna-ai/bolna/blob/477e08d6800dbf02931abeeea883d78451b7d7e2/bolna/providers.py#L29-L44

For LiteLLM-based LLMs, add whichever of the following apply to your use case to the `.env` file:

- `LITELLM_MODEL_API_KEY`: API key of the LLM
- `LITELLM_MODEL_API_BASE`: URL of the hosted LLM
- `LITELLM_MODEL_API_VERSION`: API version for LLMs like Azure

For LLMs hosted via vLLM, add the following to the `.env` file:

- `VLLM_SERVER_BASE_URL`: URL of the LLM hosted using vLLM
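
For instance, a hypothetical `.env` fragment might look like the following (all values are placeholders, not real credentials or endpoints):

```
# LiteLLM-routed model
LITELLM_MODEL_API_KEY=<your-llm-api-key>
LITELLM_MODEL_API_BASE=<url-of-the-hosted-llm>
LITELLM_MODEL_API_VERSION=<api-version-if-required>

# Or, for a model served via vLLM
VLLM_SERVER_BASE_URL=<url-of-the-vllm-server>
```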

 

TTS Providers

These are the currently supported TTS providers:
https://github.com/bolna-ai/bolna/blob/c8a0d1428793d4df29133119e354bc2f85a7ca76/bolna/providers.py#L7-L14

| Provider | Environment variable to be added in `.env` file |
|------------|--------------------------------------------------|
| AWS Polly | Accessed from system-wide credentials via `~/.aws` |
| Elevenlabs | `ELEVENLABS_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| Deepgram | `DEEPGRAM_AUTH_TOKEN` |

 

Telephony Providers

These are the currently supported telephony providers:

| Provider | Environment variable to be added in `.env` file |
|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Twilio | `TWILIO_ACCOUNT_SID`, `TWILIO_AUTH_TOKEN`, `TWILIO_PHONE_NUMBER`|
| Plivo | `PLIVO_AUTH_ID`, `PLIVO_AUTH_TOKEN`, `PLIVO_PHONE_NUMBER`|

## Extending with other Telephony Providers
If you wish to add support for another telephony provider such as Vonage, Telnyx, etc., follow the guidelines below:
1. Make sure bi-directional streaming is supported by the telephony provider
2. Add a telephony-specific input handler file in [input_handlers/telephony_providers](https://github.com/bolna-ai/bolna/tree/master/bolna/input_handlers/telephony_providers), writing custom functions extending the [telephony.py](https://github.com/bolna-ai/bolna/blob/master/bolna/input_handlers/telephony.py) class
   1. This file mainly defines how the different types of event packets coming from the telephony provider are ingested
3. Add a telephony-specific output handler file in [output_handlers/telephony_providers](https://github.com/bolna-ai/bolna/tree/master/bolna/output_handlers/telephony_providers), writing custom functions extending the [telephony.py](https://github.com/bolna-ai/bolna/blob/master/bolna/output_handlers/telephony.py) class
   1. This mainly concerns converting audio from the synthesizer class to a supported audio format and streaming it over the websocket provided by the telephony provider
4. Lastly, you'll have to write a dedicated server, like the example [twilio_api_server.py](https://github.com/bolna-ai/bolna/blob/master/local_setup/telephony_server/twilio_api_server.py) provided in [local_setup](https://github.com/bolna-ai/bolna/blob/master/local_setup/telephony_server), to initiate calls over websockets.

## Open-source vs. Paid
Though the repository is completely open source, you can connect with us if you are interested in managed hosted offerings or more customized solutions.

Schedule a meeting

## Contributing
We love contributions of all types, big or small, that help improve this community resource.

1. There are a number of [open issues](https://github.com/bolna-ai/bolna/issues) which can be good ones to start with
2. If you have suggestions for enhancements, wish to contribute a simple fix such as correcting a typo, or want to address an apparent bug, please feel free to open a new issue or submit a pull request
3. If you're contemplating a larger change or addition to this repository, be it in terms of its structure or features, please begin by [opening a new issue :octocat:](https://github.com/bolna-ai/bolna/issues/new) and outlining your proposed changes. This lets us discuss the change before you dedicate a significant amount of time or effort. Your cooperation and understanding are appreciated