# Tutorials for RAG usage with an LLM locally or in Google Colab
Simple RAG tutorials that can be run locally with an LLM or in Google Colab (Pro version only).
These notebooks can be executed locally or in Google Colab; either way, you need to install Ollama to run them.

# Tutorials
* [Extracting details from a file (PDF) using RAG](./example_rag.ipynb)
* [Extracting details from a YouTube video using RAG](./youtube_rag.ipynb)
* [Extracting details from an audio using RAG](./whisper_rag.ipynb)
* [Extracting details from a GitHub repo using RAG](./github_repo_rag.ipynb)
# Technologies used
These tutorials use LangChain, LlamaIndex, and Hugging Face to build the RAG application code, Ollama to serve the LLM, and a Jupyter or Google Colab notebook as the working environment.
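As a rough sketch of how these pieces fit together (this is not code from the notebooks; it assumes `pip install langchain langchain-community docarray` and an Ollama server already running with llama3 pulled, as described below):
```
# Minimal RAG sketch: embed a few passages, retrieve the closest one,
# and let the locally served LLM answer using it as context.
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import DocArrayInMemorySearch

llm = Ollama(model="llama3")
embeddings = OllamaEmbeddings(model="llama3")

texts = [
    "Ollama serves local LLMs over http://localhost:11434.",
    "RAG retrieves relevant passages and passes them to the LLM as context.",
]
store = DocArrayInMemorySearch.from_texts(texts, embedding=embeddings)

question = "What does RAG do?"
docs = store.similarity_search(question, k=1)
context = "\n".join(doc.page_content for doc in docs)
print(llm.invoke(f"Answer using this context:\n{context}\n\nQuestion: {question}"))
```
The notebooks build richer versions of this loop (document loaders for PDF, YouTube, audio via Whisper, and GitHub repos), but the retrieve-then-generate pattern is the same.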
# Instructions to run the example locally
* Download and install Ollama from https://ollama.com/download
* Pull the LLM model. In this case, llama3:
```
ollama pull llama3
```
More details about llama3 can be found in the [official release blog](https://llama.meta.com/llama3/) and in the [Ollama documentation](https://ollama.com/library/llama3).
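Once the model is pulled (the desktop install keeps the server running in the background), you can sanity-check it from Python against Ollama's REST API on its default port 11434. This snippet is a quick check, not part of the notebooks:
```
# Quick sanity check against the local Ollama server (default port 11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Reply with the word 'ready'.", "stream": False},
)
resp.raise_for_status()
print(resp.json()["response"])
```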
# Instructions to run the example using Google Colab (Pro account needed)
* Install Ollama from the command line (press the button on the bottom-left part of the notebook to open a Terminal):
```
curl -fsSL https://ollama.com/install.sh | sh
```
* Pull the LLM model. In this case, llama3 (on Colab no server is running yet, so `ollama serve &` starts one in the background first):
```
ollama serve & ollama pull llama3
```
* Serve the model locally so the code can access it.
```
ollama serve & ollama run llama3
```
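If you prefer to keep everything inside the notebook rather than a Colab terminal, one alternative (a sketch; it assumes the install script above has already run) is to launch the server from a cell with `subprocess`:
```
# Start the Ollama server in the background from a notebook cell,
# then pull the model once the server is listening.
import subprocess, time

server = subprocess.Popen(["ollama", "serve"])
time.sleep(5)  # give the server a moment to start listening
subprocess.run(["ollama", "pull", "llama3"], check=True)
```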
If an error related to docarray is raised, refer to this solution: https://stackoverflow.com/questions/76880224/error-using-using-docarrayinmemorysearch-in-langchain-could-not-import-docarray
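In most reports of that error, the fix amounts to installing docarray alongside LangChain, for example (check the thread above for your exact setup):
```
pip install "langchain[docarray]"
```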