{"id":25725015,"url":"https://github.com/sergiopaniego/rag_local_tutorial","last_synced_at":"2025-05-07T06:20:31.195Z","repository":{"id":232853426,"uuid":"785332156","full_name":"sergiopaniego/RAG_local_tutorial","owner":"sergiopaniego","description":"Simple RAG tutorials that can be run locally or using Google Colab (only Pro version).","archived":false,"fork":false,"pushed_at":"2024-07-22T17:39:56.000Z","size":35886,"stargazers_count":21,"open_issues_count":0,"forks_count":9,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-03-31T07:11:10.967Z","etag":null,"topics":["google-colab","langchain","llama-index","llm","ollama","open-source","rag","tutorial","whisper"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/sergiopaniego.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-04-11T17:12:29.000Z","updated_at":"2025-03-21T22:10:45.000Z","dependencies_parsed_at":"2024-06-07T15:04:31.428Z","dependency_job_id":"3b81fccc-7c26-4bcb-a4e8-50999609fe5a","html_url":"https://github.com/sergiopaniego/RAG_local_tutorial","commit_stats":null,"previous_names":["sergiopaniego/rag_local_tutorial"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiopaniego%2FRAG_local_tutorial","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiopaniego%2FRAG_local_tutorial/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiopaniego%2FR
AG_local_tutorial/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiopaniego%2FRAG_local_tutorial/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/sergiopaniego","download_url":"https://codeload.github.com/sergiopaniego/RAG_local_tutorial/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252824788,"owners_count":21809839,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["google-colab","langchain","llama-index","llm","ollama","open-source","rag","tutorial","whisper"],"created_at":"2025-02-25T22:17:39.080Z","updated_at":"2025-05-07T06:20:31.166Z","avatar_url":"https://github.com/sergiopaniego.png","language":"Jupyter Notebook","readme":"# Tutorials for RAG usage with an LLM locally or in Google Colab\n\nSimple RAG tutorials that can be run locally with an LLM or using Google Colab (Pro version only).\n\nThese notebooks can be executed locally or in Google Colab. 
\nEither way, you need to install Ollama to run them.\n\n\u003cimg src=\"./imgs/rag_diagram.png\" alt=\"RAG diagram\"/\u003e\n\n# Tutorials\n\n* [Extracting details from a file (PDF) using RAG](./example_rag.ipynb) \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/sergiopaniego/RAG_local_tutorial/blob/main/example_rag.ipynb\"\u003e\n  \u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/\u003e\n\u003c/a\u003e\n\n* [Extracting details from a YouTube video using RAG](./youtube_rag.ipynb) \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/sergiopaniego/RAG_local_tutorial/blob/main/youtube_rag.ipynb\"\u003e\n  \u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/\u003e\n\u003c/a\u003e\n\n* [Extracting details from an audio file using RAG](./whisper_rag.ipynb) \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/sergiopaniego/RAG_local_tutorial/blob/main/whisper_rag.ipynb\"\u003e\n  \u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/\u003e\n\u003c/a\u003e\n\n* [Extracting details from a GitHub repo using RAG](./github_repo_rag.ipynb) \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/sergiopaniego/RAG_local_tutorial/blob/main/github_repo_rag.ipynb\"\u003e\n  \u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/\u003e\n\u003c/a\u003e\n\n# Technologies used\n\nFor these tutorials, we use LangChain, LlamaIndex, and HuggingFace for generating the RAG application code, Ollama for serving the LLM model, and a Jupyter or Google Colab notebook.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://cdn.analyticsvidhya.com/wp-content/uploads/2023/07/langchain3.png\" alt=\"Langchain Logo\" width=\"15%\"\u003e\n  \u003cimg 
src=\"https://images.contentstack.io/v3/assets/bltac01ee6daa3a1e14/blt45d9c451c9a70269/6542d10b8b3f8e001b7aeead/img_blog_image_inline.png?width=1120\u0026disable=upscale\u0026auto=webp\" alt=\"LlamaIndex Logo\" width=\"15%\"\u003e\n  \u003cimg src=\"https://huggingface.co/datasets/huggingface/brand-assets/resolve/main/hf-logo-with-title.png\" alt=\"HuggingFace Logo\" width=\"15%\"\u003e\n  \u003cimg src=\"https://bookface-images.s3.amazonaws.com/logos/ee60f430e8cb6ae769306860a9c03b2672e0eaf2.png\" alt=\"Ollama Logo\" width=\"15%\"\u003e\n  \u003cimg src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png\" alt=\"Jupyter Logo\" width=\"15%\"\u003e\n  \u003cimg src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/d/d0/Google_Colaboratory_SVG_Logo.svg/1280px-Google_Colaboratory_SVG_Logo.svg.png\" alt=\"Google Colab Logo\" width=\"15%\"\u003e\n\u003c/p\u003e\n\n\n# Instructions to run the example locally\n\n* Download and install Ollama:\n\nGo to this URL and install it: https://ollama.com/download\n\n* Pull the LLM model. In this case, llama3:\n\n```\nollama pull llama3\n```\n\nMore details about llama3 are available in the [official release blog](https://llama.meta.com/llama3/) and in the [Ollama documentation](https://ollama.com/library/llama3).\n\n# Instructions to run the example using Google Colab (Pro account needed)\n\n* Install Ollama from the command line:\n\n(Press the button on the bottom-left part of the notebook to open a Terminal)\n\n```\ncurl -fsSL https://ollama.com/install.sh | sh\n```\n\n* Pull the LLM model. 
In this case, llama3:\n\n```\nollama serve \u0026 ollama pull llama3\n```\n\n* Serve the model locally so the code can access it:\n\n```\nollama serve \u0026 ollama run llama3\n```\n\n\nIf an error related to docarray is raised, refer to this solution: https://stackoverflow.com/questions/76880224/error-using-using-docarrayinmemorysearch-in-langchain-could-not-import-docarray\n\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsergiopaniego%2Frag_local_tutorial","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsergiopaniego%2Frag_local_tutorial","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsergiopaniego%2Frag_local_tutorial/lists"}