{"id":16094322,"url":"https://github.com/devoxx/devoxxgenieideaplugin","last_synced_at":"2026-02-22T11:58:42.920Z","repository":{"id":257816829,"uuid":"786568321","full_name":"devoxx/DevoxxGenieIDEAPlugin","owner":"devoxx","description":"DevoxxGenie is a plugin for IntelliJ IDEA that uses local LLM's (Ollama, LMStudio, GPT4All, Jan and Llama.cpp) and Cloud based LLMs to help review, test, explain your project code.","archived":false,"fork":false,"pushed_at":"2025-04-10T16:37:08.000Z","size":16016,"stargazers_count":426,"open_issues_count":47,"forks_count":48,"subscribers_count":10,"default_branch":"master","last_synced_at":"2025-04-13T00:41:52.176Z","etag":null,"topics":["anthropic","assistant","azure-ai","chatgpt","chatgpt-api","claude-3","claude-ai","copilot","copilot-chat","gemini","genai","gpt4all","groq","intellij-plugin","java","llm","lmstudio","mistral","ollama","openai"],"latest_commit_sha":null,"homepage":"https://devoxx.com","language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/devoxx.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-04-14T21:08:14.000Z","updated_at":"2025-04-13T00:36:50.000Z","dependencies_parsed_at":null,"dependency_job_id":"89ce57de-8715-437b-8835-9e0b374f1dde","html_url":"https://github.com/devoxx/DevoxxGenieIDEAPlugin","commit_stats":{"total_commits":355,"total_committers":7,"mean_commits":"50.714285714285715","dds":0.05070422535211272,"last_synced_commit":"be83ea7a5c1f12622a17a8655bfdb9139b628b66"},"previous_names":["devoxx/devoxxgenieideaplugin"],"tags_count":85,"template":false,"tem
plate_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devoxx%2FDevoxxGenieIDEAPlugin","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devoxx%2FDevoxxGenieIDEAPlugin/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devoxx%2FDevoxxGenieIDEAPlugin/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devoxx%2FDevoxxGenieIDEAPlugin/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/devoxx","download_url":"https://codeload.github.com/devoxx/DevoxxGenieIDEAPlugin/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248650417,"owners_count":21139672,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["anthropic","assistant","azure-ai","chatgpt","chatgpt-api","claude-3","claude-ai","copilot","copilot-chat","gemini","genai","gpt4all","groq","intellij-plugin","java","llm","lmstudio","mistral","ollama","openai"],"created_at":"2024-10-09T17:01:49.761Z","updated_at":"2026-02-22T11:58:42.911Z","avatar_url":"https://github.com/devoxx.png","language":"Java","readme":"## DevoxxGenie \n\n\u003cimg height=\"128\" src=\"src/main/resources/icons/pluginIcon.svg\" width=\"128\"/\u003e\n\n[![X](https://img.shields.io/twitter/follow/DevoxxGenie)](https://x.com/devoxxgenie)\n![GitHub Repo stars](https://img.shields.io/github/stars/devoxx/DevoxxGenieIDEAPlugin)\n![JetBrains Plugin 
Rating](https://img.shields.io/jetbrains/plugin/r/stars/24169-devoxxgenie)\n[![Documentation](https://img.shields.io/badge/docs-genie.devoxx.com-blue)](https://genie.devoxx.com)\n\nDevoxx Genie is a fully Java-based LLM Code Assistant plugin for IntelliJ IDEA, designed to integrate with local LLM providers such as [Ollama](https://ollama.com/), [LMStudio](https://lmstudio.ai/), [GPT4All](https://gpt4all.io/index.html), [Llama.cpp](https://github.com/ggerganov/llama.cpp) and [Exo](https://github.com/exo-explore/exo), as well as cloud-based LLMs such as [OpenAI](https://openai.com), [Anthropic](https://www.anthropic.com/), [Mistral](https://mistral.ai/), [Groq](https://groq.com/), [Gemini](https://aistudio.google.com/app/apikey), [DeepInfra](https://deepinfra.com/dash/deployments), [DeepSeek](https://www.deepseek.com/), [Kimi](https://platform.moonshot.ai/), [GLM](https://open.bigmodel.cn/), [OpenRouter](https://www.openrouter.ai/), [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and [Amazon Bedrock](https://aws.amazon.com/bedrock).\n\n**🆕 [Security Scanning](https://genie.devoxx.com/docs/features/security-scanning)** — Run **Gitleaks** (secret detection), **OpenGrep** (SAST) and **Trivy** (dependency CVEs) directly from the LLM agent. Findings are automatically created as prioritised tasks in the Spec Browser for tracking and remediation!\n\n**🆕 [Spec Driven Development (SDD)](https://genie.devoxx.com/docs/features/spec-driven-development)** — Define tasks in `Backlog.md`, browse them in the Spec Browser with Task List and Kanban Board views, then let the Agent implement them autonomously! 
Use the **Agent Loop** to run multiple tasks in a single batch with dependency ordering and automatic advancement.\n\n**🆕 [AI-powered Inline Code Completion](https://genie.devoxx.com/docs/features/inline-completion)** — Get context-aware code suggestions as you type using Fill-in-the-Middle (FIM) models via Ollama or LM Studio!\n\n**🆕 [ACP Runners](https://genie.devoxx.com/docs/features/acp-runners)** — Communicate with external agents (Kimi, Gemini CLI, Kilocode, Claude Code, Copilot) via the Agent Communication Protocol (JSON-RPC 2.0 over stdin/stdout) with structured streaming, conversation history, and capability negotiation!\n\n**🆕 [CLI Runners](https://genie.devoxx.com/docs/features/cli-runners)** — CLI Runners let you execute prompts and spec tasks directly from DevoxxGenie's chat interface or Spec Browser using external CLI tools like Claude Code, GitHub Copilot, Codex, Gemini CLI, and Kimi.\n\n**🆕 [Plugin Integration API](https://genie.devoxx.com/blog/devoxxgenie-plugin-integrations)** — Other IntelliJ plugins can integrate with DevoxxGenie at runtime via a reflection-based `ExternalPromptService` — no compile-time dependency required. Two real-world POCs show it in action: a [SonarLint fork](https://github.com/stephanj/sonarlint-devoxxgenie-intellij) and a [SpotBugs fork](https://github.com/stephanj/spotbugs-devoxxgenie-plugin) that each send code-quality findings to DevoxxGenie with a single click, or defer them as Backlog tasks for the SDD workflow.\n\nWith [Agent Mode](https://genie.devoxx.com/docs/features/agent-mode), MCPs, and frontier models like Claude Opus 4.6 and Gemini Pro, DevoxxGenie isn't just another developer tool — it's a glimpse into the future of agentic programming. 
One thing's clear: we're in the midst of a paradigm shift in AI-Augmented Programming (AAP) 🐒\n\nWe also support RAG-based prompt context based on your vectorized project files, Git Diff viewer, and LLM-driven web search with [Google](https://developers.google.com/custom-search) and [Tavily](https://tavily.com/).\n\n[\u003cimg width=\"200\" alt=\"Marketplace\" src=\"https://github.com/devoxx/DevoxxGenieIDEAPlugin/assets/179457/1c24d692-37ea-445d-8015-2c25f63e2f90\"\u003e](https://plugins.jetbrains.com/plugin/24169-devoxxgenie)\n45K+ Downloads\n\n## 📚 Documentation\n\n**[📖 Visit our comprehensive documentation at genie.devoxx.com](https://genie.devoxx.com)**\n\nQuick links:\n- [Getting Started](https://genie.devoxx.com/docs/intro) - Start using DevoxxGenie in minutes\n- [Installation Guide](https://genie.devoxx.com/docs/category/installation) - Local and cloud LLM setup\n- [Configuration](https://genie.devoxx.com/docs/category/configuration) - API keys, settings, and customization\n- [Features](https://genie.devoxx.com/docs/category/features) - Explore all capabilities\n- [Security Scanning](https://genie.devoxx.com/docs/features/security-scanning) - Gitleaks, OpenGrep and Trivy as LLM agent tools with auto-backlog task creation\n- [Agent Mode](https://genie.devoxx.com/docs/features/agent-mode) - Autonomous code tools with parallel sub-agents\n- [Spec Driven Development](https://genie.devoxx.com/docs/features/spec-driven-development) - Task management with Backlog.md, Kanban Board, and Agent implementation\n- [Agent Loop](https://genie.devoxx.com/docs/features/sdd-agent-loop) - Batch task execution with dependency ordering and progress tracking\n- [ACP Runners](https://genie.devoxx.com/docs/features/acp-runners) - Agent Communication Protocol integration with external agents\n- [CLI Runners](https://genie.devoxx.com/docs/features/cli-runners) - Execute prompts and spec tasks via external CLI tools\n- [Plugin Integration 
API](https://genie.devoxx.com/blog/devoxxgenie-plugin-integrations) - Integrate other IntelliJ plugins with DevoxxGenie at runtime\n- [Inline Code Completion](https://genie.devoxx.com/docs/features/inline-completion) - AI-powered code suggestions as you type\n- [MCP Support](https://genie.devoxx.com/docs/mcp-support) - Model Context Protocol integration\n- [RAG Setup](https://genie.devoxx.com/docs/rag) - Retrieval-Augmented Generation guide\n- [Troubleshooting](https://genie.devoxx.com/docs/troubleshooting) - Common issues and solutions\n\n### 🔒 Security Scanning\n\n**[📖 Full Security Scanning Documentation](https://genie.devoxx.com/docs/features/security-scanning)**\n\nDevoxxGenie integrates three best-in-class open-source security scanners as **LLM agent tools**. When Agent Mode is active, the LLM can invoke them on demand, interpret the results in context, and automatically create prioritised backlog tasks for every finding.\n\n| Scanner | What it detects | Install |\n|---------|----------------|---------|\n| **Gitleaks** | Hardcoded secrets, API keys, tokens | `brew install gitleaks` |\n| **OpenGrep** | SAST issues — injection flaws, insecure patterns | `brew install opengrep` |\n| **Trivy** | Dependency CVEs (SCA) | `brew install trivy` |\n\nAsk the agent: *\"Run a full security scan and create backlog tasks for everything you find.\"*\n\n\u003cimg width=\"800\" alt=\"Security scan findings as Spec Browser tasks\" src=\"docusaurus/static/img/SecurityScanner-Tasks.jpg\" /\u003e\n\nEnable in **Settings → DevoxxGenie → Security Scanning**. Each scanner has a path browser, a Test button, and install guidance. Findings are deduplicated — re-running a scan will not create duplicate tasks.\n\n### Spec Driven Development (SDD)\n\n**[📖 Full SDD Documentation](https://genie.devoxx.com/docs/features/spec-driven-development)**\n\nSpec Driven Development brings structured task management directly into your IDE. 
Instead of ad-hoc prompts, define your tasks in `Backlog.md` files, browse them in the **Spec Browser**, and let the Agent implement them autonomously.\n\n**How it works:**\n1. **Create tasks** — Use natural language prompts or write Backlog.md task specs manually\n2. **Browse in Specs** — View tasks in a Task List or visual Kanban Board with drag-and-drop\n3. **Implement with Agent** — Click \"Implement with Agent\" and let the AI do the work\n\n\u003cimg width=\"800\" alt=\"SDD Task List\" src=\"docusaurus/static/img/SDD-TaskList.png\" /\u003e\n\nThe Kanban Board gives you a visual overview of task status with drag-and-drop support:\n\n\u003cimg width=\"800\" alt=\"SDD Kanban Board\" src=\"docusaurus/static/img/SDD-Kanban.png\" /\u003e\n\n17 built-in backlog tools provide full CRUD operations on tasks, documents, and milestones — all accessible to the LLM agent for autonomous project management.\n\n#### Agent Loop — Batch Task Execution\n\nSelect multiple tasks (or click \"Run All To Do\") and the **Agent Loop** executes them sequentially in a single batch. Tasks are automatically sorted by dependencies using topological ordering, and each task gets a fresh conversation. 
The agent implements each task autonomously, and when it marks a task as Done the runner advances to the next one — with progress tracking and notifications throughout.\n\n\u003cimg width=\"993\" height=\"278\" alt=\"weather-tasks-started\" src=\"https://github.com/user-attachments/assets/d27617df-a403-4a27-bc63-31be9eb700d0\" /\u003e\n\n**[📖 Agent Loop Documentation](https://genie.devoxx.com/docs/features/sdd-agent-loop)**\n\n### Agentic Programming with DevoxxGenie\n\n#### Unlocking AI Coding Assistants: Real World Use Cases by Gunter Rotsaert\n[![DevoxxGenie Deep Dive](https://github.com/user-attachments/assets/ce9eda57-95df-4ba1-bceb-9075e8765d7f)](https://youtu.be/LtAe8EB72OI?si=wVfqa_2CUt5ZtQSO)\n\n#### Building full-stack AI agents: From project generation to code execution by Stephan Janssen\n[![DevoxxGenie Demo](https://github.com/user-attachments/assets/56c786fc-281d-4d1e-a6e3-294dfb8799df)](https://www.youtube.com/watch?v=I4qPgRHCLBY)\n\n### 🗂️ Video Tutorials:\n- [Building full-stack AI agents: From project generation to code execution (Devoxx France 2025)](https://www.youtube.com/watch?v=I4qPgRHCLBY)\n- [Agentic programming with DevoxxGenie (VoxxedDays Bucharest 2025)](https://www.youtube.com/watch?v=ZRNx9ZOoxsg)\n- [DevoxxGenie in action (Devoxx Belgium 2024)](https://www.youtube.com/watch?v=c5EyVLAXaGQ)\n- [How ChatMemory works](https://www.youtube.com/watch?v=NRAe4d7n6_4)\n- [Hands-on with DevoxxGenie](https://youtu.be/Rs8S4rMTR9s?feature=shared)\n- [The Era of AAP: Ai Augmented Programming using only Java](https://www.youtube.com/watch?v=yvgvALVs3xo)\n- [DevoxxGenie Demo 2024](https://www.youtube.com/live/kgtctcbA6WE?feature=shared\u0026t=124)\n\n### Blog Posts:\n\n- [DevoxxGenie: Your AI Assistant for IDEA](https://mydeveloperplanet.com/2024/10/08/devoxxgenie-your-ai-assistant-for-idea/)\n- [The Devoxx Genie IntelliJ Plugin Provides Access to Local or Cloud Based LLM Models](https://www.infoq.com/news/2024/05/devoxx-genie-intellij-plugin/)\n- 
[10K+ Downloads Milestone for DevoxxGenie!](https://www.linkedin.com/pulse/10k-downloads-milestone-devoxxgenie-stephan-janssen-hokce/)\n\n### Key Features:\n\n- **🔒 [Security Scanning](https://genie.devoxx.com/docs/features/security-scanning)** *(v0.9.17+)*: Run Gitleaks (secret detection), OpenGrep (SAST) and Trivy (SCA/CVEs) as LLM agent tools. Each finding is auto-created as a prioritised Backlog.md task. Enable in Settings → Security Scanning.\n- **📋 [Spec Driven Development](https://genie.devoxx.com/docs/features/spec-driven-development)** *(v0.9.7+)*: Define tasks in Backlog.md, browse them in the Spec Browser (Task List + Kanban Board), and let the Agent implement them. 17 built-in backlog tools for full CRUD on tasks, documents, and milestones. Use the [Agent Loop](https://genie.devoxx.com/docs/features/sdd-agent-loop) to run multiple tasks in batch with dependency ordering *(v0.9.8+)*.\n- **🆕 [ACP Runners](https://genie.devoxx.com/docs/features/acp-runners)** *(v0.9.10+)*: Communicate with external agents (Kimi, Gemini CLI, Kilocode, Claude Code, Copilot) via the Agent Communication Protocol with structured streaming, conversation history, and capability negotiation.\n- **🔌 [Plugin Integration API](https://genie.devoxx.com/blog/devoxxgenie-plugin-integrations)** *(v0.9.12+)*: Let other IntelliJ plugins send prompts or create Backlog tasks via a reflection-based `ExternalPromptService` — no compile-time dependency required. 
Two POC integrations available: [SonarLint DevoxxGenie](https://github.com/stephanj/sonarlint-devoxxgenie-intellij) and [SpotBugs DevoxxGenie](https://github.com/stephanj/spotbugs-devoxxgenie-plugin).\n- **🖥️ [CLI Runners](https://genie.devoxx.com/docs/features/cli-runners)** *(v0.9.9+)*: Execute prompts and spec tasks via external CLI tools (Claude Code, GitHub Copilot, Codex, Gemini CLI, Kimi) directly from the chat interface or the Spec Browser.\n- **✨ [Inline Code Completion](https://genie.devoxx.com/docs/features/inline-completion)** *(v0.9.6+)*: AI-powered code suggestions as you type using Fill-in-the-Middle (FIM) models. Supports both Ollama and LM Studio with models like StarCoder2, Qwen2.5-Coder, and DeepSeek-Coder.\n- **🤖 [Agent Mode](https://genie.devoxx.com/docs/features/agent-mode)** *(v0.9.4+)*: Autonomous code exploration and modification with built-in tools (read, write, edit, search files). Parallel sub-agents investigate multiple areas of your codebase concurrently, each with configurable provider/model. Enable in Agent Settings!\n- **🔥️ [MCP Support with Marketplace](https://genie.devoxx.com/docs/features/mcp_expanded)**: Browse and install MCP servers from the integrated marketplace. 
Add MCP servers and use them in your conversations!\n- **🗂️ [DEVOXXGENIE.md](https://genie.devoxx.com/docs/configuration/devoxxgenie-md)**: By incorporating this into the system prompt, the LLM will gain a deeper understanding of your project and provide more relevant responses.\n- **📸 [DnD images](https://genie.devoxx.com/docs/features/dnd-images)**: You can now DnD images with multimodal LLMs.\n- **🧐 [RAG Support](https://genie.devoxx.com/docs/features/rag)**: Retrieval-Augmented Generation (RAG) support for automatically incorporating project context into your prompts.\n- **👀 [Chat History](https://genie.devoxx.com/docs/features/chat-interface)**: Your chats are stored locally, allowing you to easily restore them in the future.\n- **🧠 [Project Scanner](https://genie.devoxx.com/docs/features/project-scanner)**: Add source code (full project or by package) to prompt context when using Anthropic, OpenAI or Gemini.\n- **💰 [Token Cost Calculator](https://genie.devoxx.com/docs/configuration/token-cost)**: Calculate the cost when using Cloud LLM providers.\n- **🔍 [Web Search](https://genie.devoxx.com/docs/features/web-search)**: Search the web for a given query using Google or Tavily.\n- **🏎️ [Streaming responses](https://genie.devoxx.com/docs/features/chat-interface)**: See each token as it's received from the LLM in real-time.\n- **🧐 [Abstract Syntax Tree (AST) context](https://genie.devoxx.com/docs/features/project-scanner)**: Automatically include parent class and class/field references in the prompt for better code analysis.\n- **💬 [Chat Memory Size](https://genie.devoxx.com/docs/features/chat-memory)**: Set the size of your chat memory; by default it's set to a total of 10 messages (system + user \u0026 AI msgs).\n- **☕️ 100% Java**: An IDEA plugin using local and cloud-based LLM models. 
Fully developed in Java using [Langchain4J](https://github.com/langchain4j/langchain4j)\n- **👀 [Code Highlighting](https://genie.devoxx.com/docs/features/chat-interface)**: Supports highlighting of code blocks.\n- **💬 [Chat conversations](https://genie.devoxx.com/docs/features/chat-memory)**: Supports chat conversations with configurable memory size.\n- **📁 [Add files \u0026 code snippets to context](https://genie.devoxx.com/docs/features/chat-interface)**: You can add open files to the chat window context for producing better answers or code snippets if you want to have a super focused window.\n\n### Start in 5 Minutes with local LLM\n\n- Download and start [Ollama](https://ollama.com/download)\n- Open a terminal and download a model using the command \"ollama run llama3.2\"\n- Start your IDEA and go to plugins \u003e Marketplace and enter \"Devoxx\"\n- Select \"DevoxxGenie\" and install the plugin\n- In the DevoxxGenie window select Ollama and an available model\n- Start prompting\n\n### Start in 2 Minutes using Cloud LLM\n\n- Start your IDEA and go to plugins \u003e Marketplace and enter \"Devoxx\"\n- Select \"DevoxxGenie\" and install the plugin\n- Click on the DevoxxGenie cog (settings) icon and click on the Cloud Provider link icon to create an API key\n- Paste the API key in the Settings panel\n- In the DevoxxGenie window select your cloud provider and model\n- Start prompting\n\n### 🗂️ Model Context Protocol servers support\n\n**[📖 Full MCP Documentation](https://genie.devoxx.com/docs/mcp-support)**\n\nInitial support for Model Context Protocol (MCP) server tools including debugging of MCP requests \u0026 responses!\nMCP support is a crucial feature toward full Agentic support within DevoxxGenie.\nWatch a [short demo](https://www.youtube.com/watch?v=zOPhYvgJKJU) of MCP in action using DevoxxGenie.\n\n\u003cimg width=\"1399\" alt=\"Screenshot 2025-03-22 at 17 29 33\" src=\"https://github.com/user-attachments/assets/0f8f75c9-ce85-43e8-a244-aa796c85681a\" /\u003e\n\n\nExample of the 
Filesystem-server MCP, which allows you to interact with the given directory.\n\n\n\u003cimg width=\"1399\" alt=\"Screenshot 2025-03-22 at 17 29 48\" src=\"https://github.com/user-attachments/assets/db27a7c3-622a-4d9a-9635-5561836d28c7\" /\u003e\n\nGo to the DevoxxGenie settings to enable and add your MCP servers. Browse the **MCP Marketplace** to discover and install servers with just a few clicks!\n\n\u003cimg width=\"663\" height=\"685\" alt=\"MCP\" src=\"https://github.com/user-attachments/assets/cb1c62c3-7ac4-4c90-936b-c3f58deaeabf\" /\u003e\n\n\u003cimg width=\"901\" height=\"596\" alt=\"MCPMarketplace\" src=\"https://github.com/user-attachments/assets/95298cf1-04e0-439a-86ea-db821e8565a5\" /\u003e\n\nWhen configured correctly, you can see the tools that the MCP server brings to your LLM conversations.\n\n\u003cimg width=\"1038\" alt=\"Screenshot 2025-03-22 at 17 30 40\" src=\"https://github.com/user-attachments/assets/347bcf2c-feb3-41bc-b410-7fbee2ef2f85\" /\u003e\n\n[Agentic Magic in action](https://www.youtube.com/watch?v=T3o6t8tjoq4) 👀✨🧠\n\n### 🗂️ DEVOXXGENIE.md\n\n**[📖 DEVOXXGENIE.md Documentation](https://genie.devoxx.com/docs/devoxxgenie-md)**\n\nYou can now generate a **DEVOXXGENIE.md** file directly from the \"Prompts\" plugin settings page, or just use /init in the prompt input field.\n\n\u003cimg width=\"998\" alt=\"Screenshot 2025-03-14 at 17 26 43\" src=\"https://github.com/user-attachments/assets/95f7fc0e-3764-48f9-9d3c-dc19ee4ae258\" /\u003e\n\nBy incorporating this into the system prompt, the LLM will gain a deeper understanding of your project and provide more relevant responses. 
\nThis is a first step toward enabling agentic AI features for DevoxxGenie 🔥\n\nOnce generated, you can edit the DEVOXXGENIE.md file and add more details about your project as needed.\n\n\u003cimg width=\"1101\" alt=\"Screenshot 2025-03-14 at 17 27 54\" src=\"https://github.com/user-attachments/assets/740a1a16-33b9-4fda-898f-bc6506f9b027\" /\u003e\n\n### 📸 \"I can see\" DnD images\n\nYou can now drag and drop images (and project files) directly into the input field when working with multimodal LLMs like Google Gemini, Anthropic Claude, ChatGPT 4.x, \nor even local models such as [LLaVA](https://ollama.com/library/llava).\n\n\u003cimg width=\"1689\" alt=\"Screenshot 2025-01-30 at 13 26 57\" src=\"https://github.com/user-attachments/assets/313ab609-abf1-4b3a-b4d3-3802362a713b\" /\u003e\n\u003cimg width=\"859\" alt=\"DnDImagesExample\" src=\"https://github.com/user-attachments/assets/b99ed4e6-091e-484f-87cf-12c879c661a5\" /\u003e\n\nYou can even combine screenshots with some code and then ask related questions!\n\n### 🔥 RAG Feature\n\n**[📖 Full RAG Documentation](https://genie.devoxx.com/docs/rag)**\n\n\u003cimg width=\"749\" alt=\"RAG\" src=\"https://github.com/user-attachments/assets/ea34247a-b33d-40a2-b96a-d10de0868dfa\"\u003e\n\nStarting from v0.4.0, Devoxx Genie includes a Retrieval-Augmented Generation (RAG) feature, which enables advanced code search and retrieval capabilities. 
\nThis feature uses a combination of natural language processing (NLP) and machine learning algorithms to analyze code snippets and identify relevant results based on their semantic meaning.\n\nWith RAG, you can:\n\n* Search for code snippets using natural language queries\n* Retrieve relevant code examples that match your query's intent\n* Explore related concepts and ideas in the codebase\n\nWe currently use Ollama and Nomic Text embedding to generate vector representations of your project files.\nThese embedding vectors are then stored in a Chroma DB (v0.6.2) running locally within Docker. \nThe vectors are used to compute similarity scores between search queries and your code, all running locally.\n\nThe RAG feature is a significant enhancement to Devoxx Genie's code search capabilities, enabling developers to quickly find relevant code examples and accelerate their coding workflow.\n\nSee also the [Demo](https://www.youtube.com/watch?v=VVU8x45jIt4)\n\nWe also expect to add [GraphRAG](https://github.com/devoxx/DevoxxGenieIDEAPlugin/issues/474) in the near future.\n\n### LLM Settings\nIn the IDEA settings you can modify the REST endpoints and the LLM parameters.  Make sure to press enter and apply to save your changes.\n\nWe now also support cloud-based LLMs; you can paste the API keys on the Settings page. 
\n\n\u003cimg width=\"1072\" alt=\"Settings\" src=\"https://github.com/user-attachments/assets/a88f1ae8-55dc-4c6b-b5eb-ec0c3d70b28f\"\u003e\n\n### Smart Model Selection and Cost Estimation\nThe language model dropdown is not just a list anymore; it's your compass for smart model selection.\n\n\u003cimg width=\"698\" height=\"295\" alt=\"Opus46\" src=\"https://github.com/user-attachments/assets/8174c0ad-3943-4c4c-aebd-4ba50c90d015\" /\u003e\n\n- See available context window sizes for each cloud model\n- View associated costs upfront\n- Make data-driven decisions on which model to use for your project\n\n### Add Project to prompt \u0026 clipboard\n\nYou can now add the full project to your prompt IF your selected cloud LLM has a big enough context window.\n\n![AddFull](https://github.com/devoxx/DevoxxGenieIDEAPlugin/assets/179457/be014cf1-ee01-428a-bd75-55acc82627fb)\n\n### Calc Cost\n\nLeverage the prompt cost calculator for precise budget management. Get real-time updates on how much of the context window you're using.\n\n![AddCalcProject](https://github.com/devoxx/DevoxxGenieIDEAPlugin/assets/179457/0c971331-40fe-47a4-8ede-f349fa40c00c)\n\nSee the input/output costs and context window per Cloud LLM.  Eventually we'll also allow you to edit these values.\n\n![Cost](https://github.com/devoxx/DevoxxGenieIDEAPlugin/assets/179457/422fc829-fc9f-42f4-a8e5-c33ec5a239fc)\n\n### Handling Massive Projects?\n\"But wait,\" you might say, \"my project is HUGE!\" 😅 \n\nFear not! We've got options:\n\n1. Leverage Gemini's Massive Context: \n\nGemini's colossal 1 million token window isn't just big, it's massive. We're talking about the capacity to digest approximately 30,000 lines of code in a single go. That's enough to digest most codebases whole, from the tiniest scripts to some decent projects.\n\nBut if that's not enough, you have more options...\n\n2. 
Smart Filtering: \n\nThe new \"Copy Project\" panel lets you: \n\n- Exclude specific directories\n- Filter by file extensions\n- Remove JavaDocs to slim down your context\n\n\u003cimg width=\"1072\" alt=\"ScanProject\" src=\"https://github.com/user-attachments/assets/51523394-1b36-442b-adfa-91d0c7a8182e\"\u003e\n\n3. Selective Inclusion \n\nRight-click to add only the most relevant parts of your project to the context.\n\n![RightClick](https://github.com/devoxx/DevoxxGenieIDEAPlugin/assets/179457/a86c311a-4589-41f9-bb4a-c8c4f0b884ee)\n\n## The Power of Full Context: A Real-World Example\nThe DevoxxGenie project itself, at about 70K tokens, fits comfortably within most high-end LLM context windows. \nThis allows for incredibly nuanced interactions – we're talking advanced queries and feature requests that leave tools like GitHub Copilot scratching their virtual heads!\n\n## Support for JLama \u0026 LLama3.java\nDevoxxGenie now also supports the 100% Modern Java LLM inference engine [JLama](https://github.com/tjake/Jlama).\n\nJLama offers a REST API compatible with the widely-used OpenAI API. Use the Custom OpenAI URL to connect.\n\n![image](https://github.com/user-attachments/assets/65096be3-2b34-4fea-8295-d63e04066390)\n\nYou can also integrate it seamlessly with [Llama3.java](https://github.com/stephanj/Llama3JavaChatCompletionService) by using the Spring Boot OpenAI API wrapper coupled with the JLama DevoxxGenie option.\n\n## Local LLM Cluster with Exo\n\nUse the custom OpenAI URL to connect to Exo, a local LLM cluster for Apple Silicon, which allows you to run Llama 3.1 8b, 70b and 405b on your own Apple computers 🤩\n\n![image](https://github.com/user-attachments/assets/a79033ff-d9dd-442d-aa92-0fc70cc37747)\n\n## Test Driven Generation (TDG) - Experimental\n\nWrite a unit test and let DevoxxGenie generate the implementation for that unit test. 
\nThis approach was explained by Bouke Nijhuis in his [Devoxx Belgium presentation](https://youtu.be/YRFpyGbp6h4?si=mYzJuVRMnclZJMIM)\n\nA demo on how to accomplish this can be seen in this 𝕏 [post](https://x.com/Stephan007/status/1854949507710198209).\n\n## DeepSeek R1 \u0026 DevoxxGenie 🔥\n\nAs of today (February 2, 2025), alongside the DeepSeek API Key, you can access the full 671B model for FREE using either [Nvidia](https://build.nvidia.com/deepseek-ai/deepseek-r1) or [Chutes](https://chutes.ai)!\nSimply update the Custom OpenAI URL, Model and API Key on the Settings page as follows:\n\n\u003cimg width=\"1199\" alt=\"Screenshot 2025-02-01 at 14 09 02\" src=\"https://github.com/user-attachments/assets/dfdf0b14-b6af-4e01-ab37-5ff2a92035c9\" /\u003e\n\nChutes URL : https://chutes-deepseek-ai-deepseek-r1.chutes.ai/v1/\n\nNvidia URL : https://integrate.api.nvidia.com/v1\n\n## Grok \u0026 DevoxxGenie\n\nCreate an account on [Grok](https://console.x.ai/) and generate an API Key.  Now open the DevoxxGenie settings and enter the OpenAI-compliant URL for Grok, the model you want to use, and your API Key.\n\n\u003cimg width=\"932\" alt=\"Screenshot 2025-02-18 at 08 52 27\" src=\"https://github.com/user-attachments/assets/1b1dfc7b-25b1-4be8-8687-fe042f6043a8\" /\u003e\n\n\n### Installation:\n\n**[📖 Full Installation Guide](https://genie.devoxx.com/docs/category/installation)**\n\n- **From IntelliJ IDEA**: Go to `Settings` -\u003e `Plugins` -\u003e `Marketplace` -\u003e Enter 'Devoxx' to find the [plugin](https://plugins.jetbrains.com/plugin/24169-devoxxgenie) OR install the plugin from disk\n- **From Source Code**: Clone the repository, build the plugin using `./gradlew buildPlugin`, then install the plugin from the `build/distributions` directory by selecting the file 'DevoxxGenie-X.Y.Z.zip'\n\n### Requirements:\n\n- **IntelliJ** minimum version is 2023.3.4\n- **Java** minimum version is JDK 17\n\n### Build\n\nGradle IntelliJ Plugin prepares a ZIP archive when running the buildPlugin 
task.  \nYou'll find it in the build/distributions/ directory.\n\n```shell\n./gradlew buildPlugin \n```\n\n### Publish plugin\n\nIt is recommended to use the publishPlugin task for releasing the plugin.\n\n```shell\n./gradlew publishPlugin\n```\n\n\n### Usage:\n1) Select an LLM provider from the DevoxxGenie panel (right corner)\n2) Select some code \n3) Enter a shortcode command (review, explain, generate unit tests) for the selected code, or enter a custom prompt.\n\nEnjoy!\n\n\n## Contribute\n\n**[📖 Contributing Guide](https://genie.devoxx.com/docs/contributing)**\n\n### Understanding the Prompt Flow\n\nThe DevoxxGenie IDEA Plugin processes user prompts through the following steps:\n\n### 1️⃣ User Inputs a Prompt\n- [`UserPromptPanel`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/panel/UserPromptPanel.java) → Captures the prompt from the UI.\n- [`PromptSubmissionListener.onPromptSubmitted()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/listener/PromptSubmissionListener.java) → Listens for the submission event.\n- [`PromptExecutionController.handlePromptSubmission()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/controller/PromptExecutionController.java) → Starts execution.\n\n### 2️⃣ Processing the Prompt\n- [`PromptExecutionService.executeQuery()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/service/PromptExecutionService.java) → Handles token usage calculations and checks RAG/GitDiff settings.\n- [`ChatPromptExecutor.executePrompt()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/service/ChatPromptExecutor.java) → Dispatches the prompt to the selected **LLM provider**.\n- 
[`LLMProviderService.getAvailableModelProviders()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/service/LLMProviderService.java) → Retrieves the appropriate model from `ChatModelFactory`.\n\n### 3️⃣ LLM Model Inference\n- [`ChatModelFactory.getModels()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/chatmodel/ChatModelFactory.java) → Gets the models for the select LLM provider\n- **Cloud-based LLMs:**\n  - [`OpenAIChatModelFactory.createChatModel()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/chatmodel/cloud/openai/OpenAIChatModelFactory.java)\n  - [`AnthropicChatModelFactory.createChatModel()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/chatmodel/cloud/anthropic/AnthropicChatModelFactory.java)\n  - ...\n    \n- **Local models:**\n  - [`OllamaChatModelFactory.createChatModel()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/chatmodel/local/ollama/OllamaChatModelFactory.java)\n  - [`GPT4AllChatModelFactory.createChatModel()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/chatmodel/local/gpt4all/GPT4AllChatModelFactory.java)\n  - ...\n\n### 4️⃣ Response Handling\n- **If streaming is enabled:**\n  - [`StreamingPromptExecutor.execute()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/service/streaming/StreamingPromptExecutor.java) → Begins token-by-token streaming.\n  - [`ChatStreamingResponsePanel.createHTMLRenderer()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/panel/ChatStreamingResponsePanel.java) → Updates UI in real time.\n\n- **If non-streaming:**\n  - 
[`PromptExecutionService.executeQuery()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/service/PromptExecutionService.java) → Formats the full response.\n  - [`ChatResponsePanel.displayResponse()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/panel/ChatResponsePanel.java) → Renders the text and code blocks.\n\n### 5️⃣ Enhancements (RAG)\n#### **RAG (Retrieval-Augmented Generation)**\n- **Indexing Source Code for Retrieval**\n  - [`ProjectIndexerService.indexFiles()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/service/rag/ProjectIndexerService.java) → Indexes project files\n  - [`ChromaDBIndexService.storeEmbeddings()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/service/chromadb/ChromaEmbeddingService.java) → Stores embeddings in **ChromaDB**.\n\n- **Retrieval \u0026 Augmentation**\n  - [`SemanticSearchService.search()`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/service/rag/SemanticSearchService.java) → Fetches relevant indexed code.\n  - [`SemanticSearchReferencesPanel`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/panel/chatresponse/SemanticSearchReferencesPanel.java) → Displays retrieved results.\n\n### 6️⃣ Final Display\n- The response is rendered in `ChatResponsePanel` with:\n  - [`ResponseHeaderPanel`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/panel/chatresponse/ResponseHeaderPanel.java) → Shows metadata (LLM name, execution time).\n  - [`ResponseDocumentPanel`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/panel/chatresponse/ResponseDocumentPanel.java) → Formats text \u0026 code snippets.\n  - 
[`MetricExecutionInfoPanel`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/panel/chatresponse/MetricExecutionInfoPanel.java) → Displays token usage and cost.\n\n### Understanding the Flow\n\nBelow is a **detailed flow diagram** illustrating this workflow:\n\n![DevoxxGenie Prompt Flow](docs/prompt_flow.png)\n\n---\n\n### How to Get Started\n\n- Start by **exploring [`PromptExecutionController.java`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/controller/PromptExecutionController.java)** to see how prompts are routed.\n- Modify [`ChatResponsePanel.java`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/ui/panel/ChatResponsePanel.java) if you want to **enhance response rendering**.\n- To **add a new LLM provider**, create a factory under [`chatmodel/cloud/`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/chatmodel/cloud/) or [`chatmodel/local/`](https://github.com/devoxx/DevoxxGenieIDEAPlugin/blob/master/src/main/java/com/devoxx/genie/chatmodel/local/).\n\nWant to contribute? **Submit a PR!** 🚀\n","funding_links":[],"categories":["Browser-extensions"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdevoxx%2Fdevoxxgenieideaplugin","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdevoxx%2Fdevoxxgenieideaplugin","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdevoxx%2Fdevoxxgenieideaplugin/lists"}
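To make the factory pattern behind the "add a new LLM provider" path concrete, here is a minimal, self-contained sketch. The interface and class names below (`ChatModel`, `ChatModelFactory`, `CustomOpenAiChatModelFactory`) are simplified stand-ins for illustration only, not the plugin's actual API: a real factory in `chatmodel/cloud/` would build a langchain4j chat model from the configured base URL, API key and model name, where this sketch uses an echo stub instead of an HTTP client.

```java
// Hypothetical sketch of the provider-factory pattern, NOT the plugin's real API.
// These two interfaces are simplified stand-ins for the plugin's ChatModelFactory
// and the underlying langchain4j chat model abstraction.
interface ChatModel {
    String chat(String userMessage);
}

interface ChatModelFactory {
    ChatModel createChatModel();
}

// A minimal "custom OpenAI-compatible" factory, mirroring how a provider that
// speaks the OpenAI wire format (e.g. Nvidia's or Chutes' DeepSeek R1 endpoints)
// could be wired in via a base URL, an API key and a model name.
class CustomOpenAiChatModelFactory implements ChatModelFactory {
    private final String baseUrl;
    private final String apiKey;
    private final String modelName;

    CustomOpenAiChatModelFactory(String baseUrl, String apiKey, String modelName) {
        // Fail fast on an obviously invalid endpoint, as a real factory might.
        if (!baseUrl.startsWith("http")) {
            throw new IllegalArgumentException("baseUrl must be an http(s) URL");
        }
        this.baseUrl = baseUrl;
        this.apiKey = apiKey;
        this.modelName = modelName;
    }

    String baseUrl()   { return baseUrl; }
    String modelName() { return modelName; }

    @Override
    public ChatModel createChatModel() {
        // A real implementation would construct a langchain4j OpenAI-compatible
        // chat model here, passing baseUrl, apiKey and modelName. This stub
        // only echoes the prompt, tagged with the configured model and endpoint.
        return userMessage -> "[" + modelName + "@" + baseUrl + "] " + userMessage;
    }
}
```

The point of the sketch is the shape, not the transport: each provider contributes one small factory that turns user settings into a ready-to-use chat model, which is what lets `LLMProviderService` and `ChatModelFactory.getModels()` stay provider-agnostic.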