Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/victordibia/autogen-ui
Web UI for AutoGen (A Framework for Multi-Agent LLM Applications)
- Host: GitHub
- URL: https://github.com/victordibia/autogen-ui
- Owner: victordibia
- License: mit
- Created: 2023-10-05T23:28:44.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-10-31T04:53:05.000Z (10 days ago)
- Last Synced: 2024-10-31T05:24:14.813Z (10 days ago)
- Topics: agent-based-framework, ai, ai-agents, autogen, chatgpt, deep-learning, visualization
- Language: TypeScript
- Homepage:
- Size: 1.65 MB
- Stars: 741
- Watchers: 21
- Forks: 111
- Open Issues: 14
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-rainmana - victordibia/autogen-ui - Web UI for AutoGen (A Framework Multi-Agent LLM Applications) (TypeScript)
- awesome-chatgpt - victordibia/autogen-ui - AutoGen UI is a browser extension that provides a web-based user interface for AutoGen, a framework for developing LLM applications using multiple ChatGPT-like agents. It allows regular users to interact with the agents, prototype, test, and debug agent flows, and inspect agent behaviors and outcomes. (SDK, Libraries, Frameworks / Other sdk/libraries)
README
# AutoGen UI
![AutoGen UI Screenshot](docs/images/autogenuiscreen.png)
Experimental UI for working with [AutoGen](https://github.com/microsoft/autogen) agents. The UI is built with Next.js, and the web API is built with FastAPI.
## Why AutoGen UI?
AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve complex tasks. A UI can help in the development of such applications by enabling rapid prototyping, testing, and debugging of agents and agent flows (defining, composing, etc.), and inspection of agent behaviors and outcomes.
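For context, the kind of two-agent flow this UI wraps can be sketched with the `pyautogen` package. This is an illustrative sketch, not the project's actual code: the model name and config shape are assumptions, and running a chat requires a valid `OPENAI_API_KEY`.

```python
import os

# Illustrative llm_config; the model choice is an assumption, not prescribed by autogen-ui.
llm_config = {
    "config_list": [
        {"model": "gpt-4", "api_key": os.environ.get("OPENAI_API_KEY", "")}
    ],
    "temperature": 0,
}

def build_two_agent_flow():
    # Imported lazily so the sketch can be read without pyautogen installed.
    from autogen import AssistantAgent, UserProxyAgent

    assistant = AssistantAgent("assistant", llm_config=llm_config)
    user_proxy = UserProxyAgent(
        "user_proxy",
        human_input_mode="NEVER",     # fully automated, no human turns
        code_execution_config=False,  # disable local code execution for the sketch
    )
    return assistant, user_proxy
```

A conversation would then be kicked off with `user_proxy.initiate_chat(assistant, message="...")`, which is roughly what the UI's backend does on each prompt.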
> **Note:** This is early work in progress.
Note that you will have to set up your `OPENAI_API_KEY` or general LLM config using an environment variable. Also see this article on how AutoGen supports multiple [llm providers](https://microsoft.github.io/autogen/docs/FAQ/#set-your-api-endpoints).

```bash
export OPENAI_API_KEY=
```

## Getting Started
Install dependencies. Python 3.9+ is required. You can install from PyPI using pip:

```bash
pip install autogenui
```

or install from source:

```bash
git clone [email protected]:victordibia/autogen-ui.git
cd autogen-ui
pip install -e .
```

Run the UI server.
Set env vars `OPENAI_API_KEY` and `NEXT_PUBLIC_API_SERVER`.

```bash
export OPENAI_API_KEY=
```

```bash
autogenui # or with --port 8081
```

Open http://localhost:8081 in your browser.
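Once the server is up, the API can also be exercised from code. The route `/api/generate` and the payload shape below are hypothetical, chosen only for illustration; check the FastAPI routes in the source for the actual API.

```python
import json
import urllib.request

API_BASE = "http://localhost:8081"  # assumes the default port shown above

def build_generate_request(prompt: str) -> urllib.request.Request:
    # Hypothetical route and payload shape; not the project's documented API.
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )

# Sending the request (requires the server to be running):
# with urllib.request.urlopen(build_generate_request("hello")) as resp:
#     print(resp.read().decode())
```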
To modify the UI, make your changes in the frontend source files and run `npm run build` to rebuild the frontend.
## Development
## Backend

Run with hot-reload:
```bash
autogenui --reload
```

Note: the UI loaded by this CLI is a pre-compiled version, produced by running the frontend build command shown below. If you make changes to the frontend code, or change the hostname or port the backend runs on, you will need to rebuild the frontend for those changes to load through this command.
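To make the note above concrete: a common pattern for serving a pre-built Next.js export from FastAPI (and an assumption about how autogenui does it; the real module layout and paths may differ) is to mount the build output as static files, which is why frontend edits only appear after a rebuild.

```python
import os

def ui_build_directory(package_root: str) -> str:
    # Assumed location of the pre-built frontend bundled with the package;
    # the actual layout inside autogenui may differ.
    return os.path.join(package_root, "ui")

def make_app(package_root: str):
    # Imported lazily so the sketch is readable without fastapi installed.
    from fastapi import FastAPI
    from fastapi.staticfiles import StaticFiles

    app = FastAPI()
    # Serve the pre-compiled frontend; edits under frontend/ are invisible here
    # until a rebuild regenerates this directory.
    app.mount("/", StaticFiles(directory=ui_build_directory(package_root), html=True))
    return app
```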
## Frontend
```bash
cd frontend
```

Install dependencies:
```bash
yarn install
```

Run in dev mode (with hot-reload):
```bash
export NEXT_PUBLIC_API_SERVER=http://<your_backend_hostname>/api
```

where `<your_backend_hostname>` is the host and port that autogenui is running on, e.g. `localhost:8081`.
```bash
yarn dev
```

(Re)build:
Remember to install dependencies and set `NEXT_PUBLIC_API_SERVER` before building.
```bash
yarn build
```

## Roadmap
- [x] **FastAPI endpoint for AutoGen**.
This involves setting up a FastAPI endpoint that can respond to end-user prompt requests using a basic two-agent format.
- [ ] **Basic Chat UI**
Frontend UI with a chat box for sending requests and showing responses from the endpoint, for a basic two-agent format.
- [ ] **Debug Tools**: enable support for useful debugging capabilities, like viewing:
- [x] # of agent turns per request
- [ ] define agent config (e.g. assistant agent + code agent)
- [x] append conversation history per request
- [x] display cost of interaction per request (# tokens and $ cost)
- [ ] **Streaming UI**
Enable streaming of agent responses to the UI, so responses are shown as they are generated instead of waiting for the entire response.
- [ ] **Flow based Playground UI**
Explore the use of a tool like React Flow to add agent nodes and compose agent flows. For example, setup an assistant agent + a code agent, click run and view output in a chat window.
- [ ] Create agent nodes
- [ ] Compose agent nodes into flows
- [ ] Run agent flows
- [ ] Explore external integrations, e.g. with [Flowise](https://github.com/FlowiseAI/Flowise)

## References
- [AutoGen](https://arxiv.org/abs/2308.08155).
```
@inproceedings{wu2023autogen,
title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework},
author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Shaokun Zhang and Erkang Zhu and Beibin Li and Li Jiang and Xiaoyun Zhang and Chi Wang},
year={2023},
eprint={2308.08155},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```