Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/anasaber/mlflow_with_rag
Using MLflow to track a RAG pipeline, using LLamaIndex and Ollama/HuggingfaceLLMs
- Host: GitHub
- URL: https://github.com/anasaber/mlflow_with_rag
- Owner: AnasAber
- Created: 2024-12-09T11:21:46.000Z (17 days ago)
- Default Branch: master
- Last Pushed: 2024-12-18T15:06:33.000Z (8 days ago)
- Last Synced: 2024-12-18T16:22:42.726Z (8 days ago)
- Topics: cicd, deployment, evaluation-metrics, llamaindex, llamaindex-rag, mlflow, mlflow-deployement, mlflow-projects, mlflow-tracking, mlflow-tracking-server, mlflow-ui, mlops, mlops-project, mlops-template, rag, rag-evaluation, rag-pipeline
- Language: Python
- Homepage:
- Size: 60.3 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
### MLflow Deployment of a RAG pipeline 🥀
This project is for people who want to deploy a RAG pipeline using MLflow.
The project uses:
- `LlamaIndex` and `langchain` as orchestrators
- `Ollama` and `HuggingfaceLLMs`
- `MLflow` as an MLOps framework for deploying and tracking

![Project Overview Diagram](images/mlflow_rag_schema.png)
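To give a feel for how MLflow wraps the pipeline, here is a minimal, hypothetical sketch of tracking a single RAG query. The experiment name, parameters, and stub `answer()` function are illustrative, not the project's actual code:

```python
import mlflow

# Illustrative only: wrap one RAG query in an MLflow run so the question,
# retrieval settings, and answer show up in the MLflow UI.
mlflow.set_experiment("rag_pipeline")  # experiment name is an assumption


def answer(question: str) -> str:
    # Stand-in for the real LlamaIndex / Ollama query engine call.
    return "stub answer for: " + question


with mlflow.start_run(run_name="single_query"):
    question = "What is in the indexed documents?"
    mlflow.log_param("question", question)
    mlflow.log_param("top_k", 3)  # retrieval setting, purely illustrative
    mlflow.log_text(answer(question), "response.txt")  # answer saved as an artifact
```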
### How to start

1. Clone the repository:
```bash
git clone https://github.com/AnasAber/RAG_in_CPU.git
```

2. Install the dependencies:
```bash
pip install -r requirements.txt
```
Make sure to put your API keys into `example.env`, and rename it to `.env`.

3. Notebook Prep:
- Put your own data files in the `data/` folder
- Go to the notebook and replace "api_key_here" with your Hugging Face API key
- If you have a GPU, you're fine; if not, run the notebook on Google Colab and make sure to download the JSON output file at the end of the run.

4. Go to the `deployement` folder and open two terminals. In the first one, run:
```bash
python workflow.py
```
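For context, `workflow.py` presumably logs the pipeline as an MLflow model so it can be served in the next step. A heavily simplified sketch of that pattern is below; the wrapper class is hypothetical and the artifact path is an assumption chosen to match the serve command further down:

```python
import mlflow
import mlflow.pyfunc


class RagWrapper(mlflow.pyfunc.PythonModel):
    """Hypothetical wrapper; the real project would call its RAG pipeline here."""

    def predict(self, context, model_input):
        # model_input is typically a pandas DataFrame of questions.
        return ["stub answer" for _ in range(len(model_input))]


with mlflow.start_run() as run:
    mlflow.pyfunc.log_model(
        artifact_path="rag_deployement",  # assumed to match the path used below
        python_model=RagWrapper(),
    )
    print("run id:", run.info.run_id)
```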
After the run finishes, open the MLflow UI, find your run, and copy its run ID:
![Run ID](images/run_id.png)
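If you'd rather not copy the ID from the UI, you can also fetch the latest run programmatically. A small sketch (the experiment name is an assumption):

```python
import mlflow

# Grab the most recent run of the experiment; the name is illustrative.
runs = mlflow.search_runs(
    experiment_names=["rag_pipeline"],
    order_by=["start_time DESC"],
    max_results=1,
)
if not runs.empty:
    print(runs.loc[0, "run_id"])
```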
Place it into this command:
```bash
mlflow models serve -m runs:/<run_id>/rag_deployement -p 5001
```
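Once the server is up, you can sanity-check it with a direct request to the scoring endpoint. The exact payload schema depends on the signature the model was logged with, so treat this as a hedged sketch:

```python
import requests

# Hypothetical sanity check against the MLflow scoring server started above.
payload = {"inputs": ["What is in the indexed documents?"]}  # schema is an assumption
resp = requests.post("http://127.0.0.1:5001/invocations", json=payload, timeout=60)
print(resp.status_code, resp.text)
```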
In the other terminal, run:
```bash
python app.py
```
5. Open another terminal, move to the `frontend` folder, and run:
```bash
npm start
```

Now you should see the web interface, with the other two terminals still running.
![Interface](images/interface.png)

If you run into errors, check what might be missing in `requirements.txt`.
Enjoy!