https://github.com/aryanvbw/private-ai
Private-AI is an innovative AI project designed for asking questions about your documents using powerful Large Language Models (LLMs). The unique feature? It works offline, ensuring 100% privacy with no data leaving your environment.
- Host: GitHub
- URL: https://github.com/aryanvbw/private-ai
- Owner: AryanVBW
- License: apache-2.0
- Created: 2023-12-08T01:48:21.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-02-26T22:02:25.000Z (over 1 year ago)
- Last Synced: 2025-03-22T22:05:17.708Z (7 months ago)
- Topics: aryanvbw, chatgpt, chatgpt-4, chatgpt3, chatgpt4, gpt4, gpt4all, privategpt, vivek, vivek-w
- Language: Python
- Homepage: https://aryanvbw.github.io/Private-Ai/
- Size: 11.8 MB
- Stars: 18
- Watchers: 1
- Forks: 6
- Open Issues: 16
- Metadata Files:
  - Readme: README.md
  - Changelog: CHANGELOG.md
  - License: LICENSE
  - Citation: CITATION.cff
## README
# Welcome to Private-AI!
Private-AI is an innovative AI project designed for asking questions about your documents using powerful Large Language Models (LLMs). The unique feature? It works offline, ensuring 100% privacy with no data leaving your environment.
## What does Private-AI offer?
- **High-level API:** Abstracts the complexity of a Retrieval Augmented Generation (RAG) pipeline. Handles document ingestion, chat, and completions.
- **Low-level API:** For advanced users to implement custom pipelines. Includes features like embeddings generation and contextual chunks retrieval.
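To make the two layers concrete, here is a hedged `curl` sketch against a locally running instance on the default port. The endpoint paths (`/v1/ingest`, `/v1/completions`, `/v1/chunks`) and request fields mirror the upstream PrivateGPT API and are assumptions for this fork; verify the exact schema against the server's interactive API docs.
```bash
# High-level API (assumed PrivateGPT-style endpoints): ingest a document,
# then ask a question answered with context retrieved from your documents.
curl -s http://localhost:8001/v1/ingest -F "file=@./my-report.pdf"

curl -s http://localhost:8001/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What are the key findings of the report?", "use_context": true}'

# Low-level API (assumed): retrieve the contextual chunks behind such an answer.
curl -s http://localhost:8001/v1/chunks \
  -H "Content-Type: application/json" \
  -d '{"text": "key findings", "limit": 4}'
```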
## Why Private-AI?
Privacy is the key motivator! Private-AI addresses concerns in data-sensitive domains like healthcare and legal, ensuring your data stays under your control.
# Installation
---
**Private-AI Installation Guide**
- Install Python 3.11 (or 3.12)
- Using apt (Debian-based Linux such as Kali, Ubuntu, etc.):
```bash
sudo apt install python3.11
sudo apt install python3.11-venv
```
- Using pyenv:
```bash
pyenv install 3.11
pyenv local 3.11
```
- Install [Poetry](https://python-poetry.org/docs/#installing-with-pipx) for dependency management.
```bash
sudo apt install python3-poetry
sudo apt install python3-pytest
```
### Installation Without GPU:
- Clone the Private-AI repository:
```bash
git clone https://github.com/AryanVBW/Private-Ai
cd Private-Ai && \
python3.11 -m venv .venv && source .venv/bin/activate && \
pip install --upgrade pip poetry && poetry install --with ui,local && ./scripts/setup
python3.11 -m private_gpt
```
### Running Private-AI:
- To run it again later, go to the Private-AI directory and run the following command:
```bash
make run
```
# All Done!
## For GPU utilization and customization, follow the steps below:
- For Private-AI to run fully locally, GPU acceleration is required (CPU execution is possible, but very slow).
### Clone the repository
- Clone the Private-AI repository:
```bash
git clone https://github.com/AryanVBW/Private-Ai
cd Private-Ai
```
### Dependency Installation:
- Install make (OSX: `brew install make`, Windows: `choco install make`).
- Install dependencies:
```bash
poetry install --with ui
```
### Local LLM Setup:
- Install extra dependencies for local execution:
```bash
poetry install --with local
```
- Use the setup script to download embedding and LLM models:
```bash
poetry run python scripts/setup
```
### Finalize:
- Finish the Private-AI installation:
```bash
make
```
### Verification and Run:
- Run `make run` or `poetry run python -m private_gpt`.
- Open http://localhost:8001 to see the Gradio UI with a mock LLM echoing your input.
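If you prefer a quick command-line check before opening the browser, the sketch below assumes the server exposes a PrivateGPT-style `/health` endpoint on the default port 8001 (adjust the port if you changed it in `settings.yaml`):
```bash
# Assumed health endpoint; should return a small JSON status once the server is up.
curl -s http://localhost:8001/health
```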
### Customization:
- Customize low-level parameters in `private_gpt/components/llm/llm_component.py`.
- Configure LLM options in `settings.yaml`.
### GPU Support:
- **OSX**: Build llama.cpp with Metal support.
```bash
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```
- **Windows NVIDIA GPU**: Install [VS2022](https://visualstudio.microsoft.com/vs/community/), [CUDA toolkit](https://developer.nvidia.com/cuda-downloads), and run:
```powershell
$env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```
- **Linux NVIDIA GPU and Windows-WSL**: Install [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) and run:
```bash
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```
### Troubleshooting:
- Check GPU support and dependencies for your platform.
- For C++ compiler issues, follow the troubleshooting steps below.

**Note**: If you run into issues, retry the installation in verbose mode by adding `-vvv` to the `pip install` command.
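For example, the Linux/WSL reinstall command from the previous section can be rerun verbosely like this; adapt the `CMAKE_ARGS` value to your platform:
```bash
# Same command as above, with -vvv added so pip prints the full build output.
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install -vvv --force-reinstall --no-cache-dir llama-cpp-python
```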
**Troubleshooting C++ Compiler**:
- **Windows 10/11**: Install Visual Studio 2022 and MinGW.
- **OSX**: Ensure Xcode is installed, or install clang/gcc with Homebrew.

---
## Architecture Highlights:
- **FastAPI-Based API:** Follows the OpenAI API standard, making it easy to integrate.
- **LlamaIndex Integration:** Leverages LlamaIndex for the RAG pipeline, providing flexibility and extensibility.
- **Present and Future:** Evolving into a gateway for generative AI models and primitives. Stay tuned for exciting new features!
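Because the API follows the OpenAI standard, an OpenAI-style request can usually be sent straight to the local server. A minimal sketch, assuming a PrivateGPT-style `/v1/chat/completions` endpoint on the default port (check the FastAPI interactive docs, typically served at `/docs`, for the exact schema):
```bash
# Hypothetical request: OpenAI chat format plus an assumed "use_context" flag
# that asks the server to answer from the ingested documents.
curl -s http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Summarize the ingested documents in two sentences."}],
        "use_context": true,
        "stream": false
      }'
```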
## How to Contribute?
Contributions are welcome! Check the ProjectBoard for ideas. Ensure code quality with format and typing checks (run `make check`).
## Supporters:
Supported by Qdrant, Fern, and LlamaIndex. Influenced by projects like LangChain, GPT4All, LlamaCpp, Chroma, and SentenceTransformers.
Thank you for contributing to the future of private and powerful AI with Private-AI!

**License:** Apache-2.0
# Copyright Notice
This is a modified version of [PrivateGPT](https://github.com/imartinez/privateGPT). All rights and licenses belong to the PrivateGPT team. © 2023 PrivateGPT Developers. All rights reserved.