https://github.com/apocas/restai
RestAI is an AIaaS (AI as a Service) open-source platform. Built on top of LlamaIndex, Ollama and HF Pipelines. Supports any public LLM supported by LlamaIndex and any local LLM supported by Ollama. Precise embeddings usage and tuning.
- Host: GitHub
- URL: https://github.com/apocas/restai
- Owner: apocas
- License: apache-2.0
- Created: 2023-05-18T22:27:33.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2024-04-13T12:26:18.000Z (7 months ago)
- Last Synced: 2024-04-14T02:24:21.851Z (7 months ago)
- Topics: embeddings, fastapi, langchain, llama, llamaindex, llava, llm, ollama, openai, openaiapi, python, rag, stable-diffusion, transformers
- Language: Python
- Homepage: https://apocas.github.io/restai/
- Size: 32.2 MB
- Stars: 289
- Watchers: 8
- Forks: 53
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome - apocas/restai - RESTai is an AIaaS (AI as a Service) open-source platform. Built on top of LlamaIndex & Langchain. Supports any public LLM supported by LlamaIndex and any local LLM supported by Ollama/vLLM/etc. Precise embeddings usage and tuning. Image generation (Dall-E, SD, Flux). (Python)
README
RestAI
AIaaS (AI as a Service) for everyone. Create AI projects and consume them using a simple REST API.
Demo: https://ai.ince.pt (username: `demo`, password: `demo`)
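Consuming a project is a plain HTTP call. Below is a minimal sketch using Python's `requests` against the demo instance; the endpoint path, project name, and payload fields are assumptions for illustration, so check the [Swagger docs](https://apocas.github.io/restai/swagger/) for the actual contract.

```python
import requests

BASE_URL = "https://ai.ince.pt"  # the public demo instance from above

# Hypothetical endpoint and payload shape -- verify against the Swagger docs.
resp = requests.post(
    f"{BASE_URL}/projects/myproject/question",  # "myproject" is illustrative
    json={"question": "What is RestAI?"},
    auth=("demo", "demo"),  # basic auth, per the demo credentials above
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```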
## Features
- **Projects**: There are multiple types of agents (projects), each with its own features. ([rag](https://github.com/apocas/restai?tab=readme-ov-file#rag), [ragsql](https://github.com/apocas/restai?tab=readme-ov-file#ragsql), [inference](https://github.com/apocas/restai?tab=readme-ov-file#inference), [vision](https://github.com/apocas/restai?tab=readme-ov-file#vision), [router](https://github.com/apocas/restai?tab=readme-ov-file#router), [agent](https://github.com/apocas/restai?tab=readme-ov-file#agent))
- **Users**: A user represents an account on the system and is used for authentication and authorization (basic auth). Each user may have access to multiple projects.
- **LLMs**: Supports any public LLM supported by LlamaIndex, which includes any local LLM supported by Ollama, LiteLLM, etc.
- **VRAM**: Automatic VRAM management. RestAI will manage the VRAM usage, automatically loading and unloading models as needed and requested.
- **API**: The API is a first-class citizen of RestAI. All endpoints are documented using [Swagger](https://apocas.github.io/restai/).
- **Frontend**: There is a frontend available at [restai-frontend](https://github.com/apocas/restai-frontend).
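As a hedged sketch of the basic workflow implied by the features above: create a project, pick an LLM, then query it. The endpoint path and field names below are assumptions, not the documented API; only the concepts (projects, basic-auth users, LLM selection) come from the feature list.

```python
import requests

BASE_URL = "http://localhost:9000"  # assumption: a local RestAI instance
AUTH = ("demo", "demo")             # basic auth, as described above

# Hypothetical project-creation call; field names are illustrative only.
resp = requests.post(
    f"{BASE_URL}/projects",
    json={"name": "docs-bot", "type": "rag", "llm": "llama3"},
    auth=AUTH,
)
resp.raise_for_status()
```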
## Project Types

### RAG
- **Embeddings**: You may use any embeddings model supported by LlamaIndex. Check the embeddings [definition](modules/embeddings.py).
- **Vectorstore**: Two vectorstores are supported: `Chroma` and `Redis`.
- **Retrieval**: It features an embeddings search and score evaluator, which lets you evaluate the quality of your embeddings and simulate the RAG process before involving the LLM. Reranking is also supported, both ColBERT based and LLM based.
- **Loaders**: You may use any loader supported by LlamaIndex.
- **Sandboxed mode**: RAG agents (projects) have a "sandboxed" mode, meaning a locked default answer is given when there are no embeddings for the provided question. This is useful for chatbots, where you want a default answer when the LLM doesn't know how to respond, reducing hallucination.
- **Evaluation**: You may evaluate your RAG agent using [deepeval](https://github.com/confident-ai/deepeval), via the `eval` property in the RAG endpoint.
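The `eval` property mentioned above is attached to a question request. A hedged sketch, reusing the hypothetical endpoint from earlier; only `question` and `eval` are named in the docs above, everything else is illustrative:

```python
import requests

# Hypothetical RAG query with evaluation enabled; the endpoint path and
# project name are assumptions -- only the `eval` property is documented above.
resp = requests.post(
    "http://localhost:9000/projects/docs-bot/question",
    json={"question": "How do I reset my password?", "eval": True},
    auth=("demo", "demo"),
    timeout=120,
)
print(resp.json())  # expected to include the answer plus deepeval scores
```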
### RAGSQL
- **Connection**: Supply a MySQL or PostgreSQL connection string and RestAI will automatically crawl the DB schema; using table and column names, it figures out how to translate the question into SQL and then writes a response.
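For example, a RAGSQL project needs only a standard SQLAlchemy-style connection string. The creation call below is a sketch with a hypothetical endpoint and field names; the connection-string formats themselves are standard:

```python
import requests

# Hypothetical project-creation call; only the connection-string formats
# are standard, the endpoint and field names are assumptions.
requests.post(
    "http://localhost:9000/projects",
    json={
        "name": "sales-db",
        "type": "ragsql",
        "connection": "mysql+pymysql://user:password@mysql:3306/sales",
        # or: "postgresql+psycopg2://user:password@postgres:5432/sales"
    },
    auth=("demo", "demo"),
)
```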
### Agent
- ReAct agents: specify which tools to use in the project and the agent will figure out how to use them to achieve the objective.
- New tools are easily added. Just create a new tool in the app/llms/tools folder and it will be automatically picked up by RestAI (see the sketch after this list).
- **Tools**: Supply all the tool names you want the agent to use in this project, separated by commas.
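RestAI's exact tool interface isn't spelled out here, but since the platform is built on LlamaIndex, a new tool in `app/llms/tools` plausibly wraps a LlamaIndex `FunctionTool` (the kind ReAct agents consume). A sketch under that assumption:

```python
# app/llms/tools/weather.py -- hypothetical module; the exact shape RestAI
# expects is an assumption. This is a standard LlamaIndex FunctionTool.
from llama_index.core.tools import FunctionTool


def get_weather(city: str) -> str:
    """Return a canned weather report for a city (stub for illustration)."""
    return f"The weather in {city} is sunny and 22°C."


weather_tool = FunctionTool.from_defaults(
    fn=get_weather,
    name="weather",
    description="Get the current weather for a city.",
)
```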
### Inference
### Vision
- **text2img**: RestAI supports local Stable Diffusion and Dall-E. It features prompt boosting: an LLM is used internally to enrich the user prompt with more detail.
- **img2text**: RestAI supports LLaVA and BakLLaVA by default.
- **img2img**: RestAI supports InstantID.
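As a hedged sketch of an img2text call: LLaVA-backed projects answer questions about an image. The endpoint and the `image` field name are assumptions; base64 is simply a common way to ship an image in a JSON payload.

```python
import base64
import requests

# Hypothetical img2text request; endpoint and field names are illustrative.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:9000/projects/vision-demo/question",
    json={"question": "What is in this picture?", "image": image_b64},
    auth=("demo", "demo"),
    timeout=120,
)
print(resp.json())
```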
#### Stable Diffusion & [InstantID](https://github.com/InstantID/InstantID)

#### LLaVA
### Router
- Routes a message to the most suitable project. It's useful when you have multiple projects and want each question answered by the one best suited to it.
- **Routes**: Very similar to the Zero Shot React strategy, but each route is a project. The router sends the question to the project with the highest score.
## LLMs
- You may use any LLM supported by Ollama and/or LlamaIndex.
## Installation
- RestAI uses [Poetry](https://python-poetry.org/) to manage dependencies. Install it with `pip install poetry`.
## Development
- `make install`
- `make dev` (starts RestAI in development mode)

## Production
- `make install`
- `make start`

## Docker
- Edit the .env file accordingly
- `docker compose --env-file .env up --build`

You can specify profiles (`docker compose --profile redis --profile mysql ...`) to include additional components like the Redis cache backend or a database server. The supported profiles are:
- `--profile redis` Starts and sets redis as the cache backend
- `--profile mysql` Starts and enables Mysql as the database server
- `--profile postgres` Starts and enables Postgres as the database server

When using the containers, the variables `MYSQL_HOST` and `POSTGRES_HOST` should match the names of the respective services ("mysql" and "postgres"), not localhost or 127.0.0.1; for example, `MYSQL_HOST=mysql` rather than `MYSQL_HOST=127.0.0.1`.
To delete everything or a specific container, don't forget to pass the necessary profiles to the compose command, e.g.:
- Removing everything
`docker compose --profile mysql --profile postgres down --rmi all`
- Removing a single database volume
`docker compose --profile mysql down --volumes`

*Note: the local_cache volume will also get removed, since it's in the main service and not in any profile.*
## API
- **Endpoints**: All the API endpoints are documented and available at: [Endpoints](https://apocas.github.io/restai/api.html)
- **Swagger**: Swagger/OpenAPI documentation: [Swagger](https://apocas.github.io/restai/swagger/)

## Frontend
- Source code at [https://github.com/apocas/restai-frontend](https://github.com/apocas/restai-frontend).
- `make install` automatically installs the frontend.

## Tests
- Tests are implemented using `pytest`. Run them with `make test`.
## License
Pedro Dias - [@pedromdias](https://twitter.com/pedromdias)
Licensed under the Apache license, version 2.0 (the "license"); You may not use this file except in compliance with the license. You may obtain a copy of the license at:
http://www.apache.org/licenses/LICENSE-2.0.html
Unless required by applicable law or agreed to in writing, software distributed under the license is distributed on an "as is" basis, without warranties or conditions of any kind, either express or implied. See the license for the specific language governing permissions and limitations under the license.