![MIT License](https://img.shields.io/badge/license-MIT-blue)
![Built with LangChain](https://img.shields.io/badge/Built%20with-LangChain-4b7bec)
![Offline AI](https://img.shields.io/badge/LLM-Ollama-green)
![last commit](https://img.shields.io/github/last-commit/EzioDEVio/ai-knowledge-bot?color=blue)
![repo size](https://img.shields.io/github/repo-size/EzioDEVio/ai-knowledge-bot)
![GitHub issues](https://img.shields.io/github/issues/EzioDEVio/ai-knowledge-bot)
![Forks](https://img.shields.io/github/forks/EzioDEVio/ai-knowledge-bot?style=social)
![Stars](https://img.shields.io/github/stars/EzioDEVio/ai-knowledge-bot?style=social)
![PRs](https://img.shields.io/github/issues-pr/EzioDEVio/ai-knowledge-bot)

# 🧠 AI Knowledge Bot

This is my custom-built offline AI bot that lets you chat with PDFs and web pages using **local embeddings** and **local LLMs** like LLaMA 3.

I built it step by step using LangChain, FAISS, HuggingFace, and Ollama — without relying on the OpenAI or DeepSeek APIs anymore (they kept failing or costing too much).

---

## 🚀 Features

- 📄 Chat with uploaded PDF files
- 🌍 Ask questions about a webpage URL
- 🧠 Uses local HuggingFace embeddings (`all-MiniLM-L6-v2`)
- 🦙 Powered by Ollama + LLaMA 3 (fully offline LLM)
- 🗃️ Built-in FAISS vectorstore
- 🧾 Inline PDF preview
- 🧮 Built-in calculator + summarizer tools (via LangChain agents)
- 🧠 Page citation support (know where each answer came from)
- 📜 Chat history viewer with download button (JSON)
- 🎛️ Simple Streamlit UI with dark/light mode toggle
- 👨‍💻 Footer credit: *Developed by EzioDEVio*

---

## 📦 Tech Stack

- `langchain`, `langchain-community`
- `sentence-transformers` for local embeddings
- `ollama` for local LLMs (`llama3`)
- `PyPDF2` for PDF parsing
- `FAISS` for vector indexing
- `Streamlit` for the frontend

---

## 🛠 Setup Guide

### 1. Clone this repo

```bash
git clone https://github.com/EzioDEVio/ai-knowledge-bot.git
cd ai-knowledge-bot
```

---

### 2. Create and activate a virtualenv (optional but recommended)

```bash
python -m venv venv
.\venv\Scripts\activate  # Windows; activation differs on macOS/Linux
```

---
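Since the activation command above is Windows-specific, here is a quick cross-platform reference (assuming the environment was created with `python -m venv venv` as shown):

```shell
# Create the virtual environment (same on every platform)
python -m venv venv

# Activate it — Windows (PowerShell):
#   .\venv\Scripts\Activate.ps1
# Activate it — Windows (cmd.exe):
#   venv\Scripts\activate.bat
# Activate it — macOS / Linux (bash/zsh):
source venv/bin/activate
```
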
### 3. Install dependencies

```bash
pip install -r requirements.txt
```

Make sure `sentence-transformers` is installed — it is needed for the local embeddings.

---

### 4. Install Ollama (for the local LLM)

Download and install from:

👉 [https://ollama.com/download](https://ollama.com/download)

After installation, verify:

```bash
ollama --version
```

Then pull and run the model:

```bash
ollama run llama3
```

> This downloads the LLaMA 3 model (approx. 4–8 GB). You can also try `mistral`, `codellama`, etc.

---

### 5. Run the app

```bash
streamlit run app.py
```

The app opens at:

```
http://localhost:8501
```

---

## 📁 Folder Structure

```
ai-knowledge-bot/
├── app.py                     # Main Streamlit UI
├── backend/
│   ├── pdf_loader.py          # PDF text extraction
│   ├── web_loader.py          # Webpage scraper
│   ├── vector_store.py        # Embedding + FAISS
│   └── qa_chain.py            # LLM QA logic (Ollama + tools)
├── .env                       # No longer used (was for API keys)
├── requirements.txt
└── README.md
```

---

## ✅ Working Setup Summary

| Component        | Mode                                  |
| ---------------- | ------------------------------------- |
| Embeddings       | Local (`HuggingFace`)                 |
| Vectorstore      | Local (`FAISS`)                       |
| LLM Response     | Local (`Ollama` + `llama3`)           |
| Internet Needed? | ❌ Only for first-time model download |
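For intuition, the Embeddings + Vectorstore rows in the table above boil down to nearest-neighbour search over vectors. A dependency-free toy sketch (word-count vectors stand in for the real `all-MiniLM-L6-v2` embeddings, and a brute-force cosine scan stands in for FAISS; the function names here are illustrative, not the repo's actual API):

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Toy 'embedding': a sparse bag-of-words count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query.

    FAISS does the same job, but with dense vectors and an index
    instead of this brute-force scan.
    """
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]


chunks = [
    "Ollama runs large language models locally on your machine.",
    "FAISS is a library for efficient similarity search over vectors.",
    "Streamlit turns Python scripts into shareable web apps.",
]
print(retrieve("how do I search vectors for similarity?", chunks))
# best match: the FAISS chunk
```

In the real app the retrieved chunks are then passed to the local LLM as context, which is the "R" in RAG.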
---

## ⚠️ Why I Avoided OpenAI / DeepSeek

* **OpenAI** failed with `RateLimitError` and quota issues unless I added billing.
* **DeepSeek** embedding endpoints didn't work — only chat models are supported.

So I switched to:

* 🔁 Local `HuggingFaceEmbeddings` for vectorization
* 🦙 `ChatOllama` for fully offline AI answers

---

## ✅ Completed Features

* ✅ PDF upload + preview
* ✅ URL content QA
* ✅ Chat history with page citations
* ✅ Calculator + summarizer tools
* ✅ Footer attribution
* ✅ JSON export
* ✅ 100% offline functionality

---

## 🐳 Run with Docker (Secure Production Mode)

Build and run the app securely using a **multi-stage Dockerfile**:

1. Build the container:

```bash
docker build -t ai-knowledge-bot .
```

2. Run the container. Make sure Ollama is running on the host, then in a separate terminal (e.g. PowerShell) run:

```bash
docker run -p 8501:8501 \
  --add-host=host.docker.internal:host-gateway \
  ai-knowledge-bot
```

---

## 🔐 Dockerfile Security Highlights

- ✅ Multi-stage build (separates dependencies from runtime)
- ✅ Minimal base image (`python:3.10-slim`)
- ✅ Non-root `appuser` by default
- ✅ `.env`, venv, and logs excluded via `.dockerignore`
- ✅ Exposes only the necessary port (8501)
- ✅ Automatically starts the Streamlit app

---

## 💬 License

MIT — feel free to fork, use, or improve it.

---

## 🔥 Built by EzioDEVio | 🇮🇶 | 🧠

From concept to offline AI — all step by step.
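For reference, a multi-stage Dockerfile matching the security highlights listed above might look roughly like this. This is a sketch, not the repo's actual Dockerfile; the stage layout and paths are assumptions:

```dockerfile
# --- Stage 1: build dependencies only ---
FROM python:3.10-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# --- Stage 2: minimal runtime, non-root ---
FROM python:3.10-slim
RUN useradd --create-home appuser
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
USER appuser
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.address=0.0.0.0"]
```

Binding Streamlit to `0.0.0.0` is what makes the `-p 8501:8501` mapping in the `docker run` command reachable from the host.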