https://github.com/leotodisco/tars
VS Code extension for real-time, LLM-powered code explanations with Theory of Mind and visual annotations.
- Host: GitHub
- URL: https://github.com/leotodisco/tars
- Owner: leotodisco
- License: mit
- Created: 2025-03-01T13:47:08.000Z (8 months ago)
- Default Branch: master
- Last Pushed: 2025-07-21T15:46:07.000Z (3 months ago)
- Last Synced: 2025-07-21T17:50:08.020Z (3 months ago)
- Topics: agentic-ai, javascript, langgraph, vscode-extension
- Language: JavaScript
- Homepage:
- Size: 1.24 MB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
README
# TARS - Intelligent Code Explanations for VS Code
**TARS** is a Visual Studio Code extension that provides real-time, developer-personalized explanations of your code, powered by **LLM Agents** built with [LangGraph](https://www.langgraph.dev/).
It integrates cutting-edge concepts like **Self-Refinement**, **Theory of Mind**, and **Perspective Taking**, allowing you to receive **accurate, personalized, and visually contextualized explanations**, either locally for full privacy or via API-powered cloud models.
---
## Features
- **Real-Time Code Explanation**
  Automatically analyzes and explains functions, classes, and logical blocks in natural language.
- **Theory of Mind + Perspective Taking**
  TARS adapts its explanations to your **experience level, goals, and preferences**. This includes a form of **Perspective Taking**, where the agent reasons from your mental profile to communicate effectively.
- **Self-Refinement Mechanism**
  Before giving you an answer, the agent refines its own output using LangGraph's stateful multi-step reasoning (a sketch of this loop follows the list).
- **Visual Inline Explanations**
  Explanations appear next to the code as editor decorations, making them intuitive and non-intrusive.
- **Local or Cloud Execution**
  Choose privacy or performance: run locally with [Ollama](https://ollama.ai), or use OpenAI / DeepSeek for broader LLM access.
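As a rough picture of that self-refinement loop, here is a minimal sketch assuming `@langchain/langgraph`'s JavaScript API; the node logic, state fields, and revision budget are hypothetical stand-ins, not TARS's actual graph.

```js
// Minimal self-refinement sketch, assuming @langchain/langgraph's JS API.
// The model calls are stubbed so the example is self-contained;
// TARS's real graph, prompts, and state schema may differ.
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// Hypothetical stand-ins for real LLM calls.
const llmExplain = async (code) => `Draft explanation of:\n${code}`;
const llmCritique = async (draft) => (draft.length > 40 ? "OK" : "Too shallow");
const llmRevise = async (draft, critique) => `${draft}\n(revised to address: ${critique})`;

const RefineState = Annotation.Root({
  code: Annotation(),      // code selected in the editor
  draft: Annotation(),     // current explanation draft
  critique: Annotation(),  // critic feedback on the draft
  revisions: Annotation(), // number of refinement passes so far
});

const app = new StateGraph(RefineState)
  .addNode("draft", async (s) => ({ draft: await llmExplain(s.code), revisions: 0 }))
  .addNode("critique", async (s) => ({ critique: await llmCritique(s.draft) }))
  .addNode("refine", async (s) => ({
    draft: await llmRevise(s.draft, s.critique),
    revisions: s.revisions + 1,
  }))
  .addEdge(START, "draft")
  .addEdge("draft", "critique")
  // Keep refining until the critic is satisfied or three passes have run.
  .addConditionalEdges("critique", (s) => (s.critique === "OK" || s.revisions >= 3 ? END : "refine"))
  .addEdge("refine", "critique")
  .compile();

// Usage: const { draft } = await app.invoke({ code: selectedCode });
```

---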
## Supported LLM Providers
1. **Ollama** (local, fully private)
2. **OpenAI** (via API key)
3. **DeepSeek** (via API key)
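The extension's own wiring isn't reproduced here, but as a hedged sketch of what switching between these backends can look like (assuming LangChain's JavaScript chat-model packages, which LangGraph composes with; the model names and the DeepSeek base URL are illustrative):

```js
// Hypothetical provider switch, assuming LangChain's JS chat-model classes.
import { ChatOpenAI } from "@langchain/openai";
import { ChatOllama } from "@langchain/ollama";

function createChatModel(backend, modelName, apiKey) {
  switch (backend) {
    case "ollama":   // local, fully private; talks to the Ollama server on localhost
      return new ChatOllama({ model: modelName, baseUrl: "http://localhost:11434" });
    case "openai":   // hosted, needs an API key
      return new ChatOpenAI({ model: modelName, apiKey });
    case "deepseek": // OpenAI-compatible endpoint, so the same client class can be reused
      return new ChatOpenAI({
        model: modelName,
        apiKey,
        configuration: { baseURL: "https://api.deepseek.com" },
      });
    default:
      throw new Error(`Unknown backend: ${backend}`);
  }
}
```

---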
## Getting Started
### 1. Install the Extension
Open the VS Code command palette (`F1` or `Ctrl+Shift+P` on Windows/Linux, `Cmd+Shift+P` on macOS) and search for:
> **`Extensions: Install from VSIX`**
Then select the `.vsix` file provided in the **Releases** section of the repository.
> Note: The `.vsix` file is precompiled, so you don't need to run `npm install` or `npm run compile`.
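If you prefer the terminal, the same package can be installed with VS Code's CLI: `code --install-extension path/to/tars.vsix` (substitute the path to the file you downloaded; the exact filename depends on the release).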
---
### 2. Configure Tars
Open the **Command Palette** (`F1`, `Ctrl+Shift+P`, or `Cmd+Shift+P`) and run the command:
> **`Configure Tars`**
You will be prompted to provide the following information:
1. **Select the backend**: Choose whether you want to use the **OpenAI API** or run a **local model** via **Ollama**.
2. **Model configuration**: Enter the **model name** (e.g., `gpt-4`, `deepseek-coder`, `llama2`).
3. **API key (if using OpenAI)**: Provide your **OpenAI API key** when prompted.

> Tip: You can rerun `Configure Tars` at any time to change these settings.

For **Ollama**, make sure the local server is running.
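If you chose Ollama, you can sanity-check the local setup from a terminal: `ollama serve` starts the server (it listens on `http://localhost:11434` by default), and `ollama pull <model-name>` downloads the model you entered during configuration.

---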
#### Main Commands
To actually use TARS, run the following commands:
- `TARS: Theory Of Mind Profiler`
  Starts a short quiz to personalize explanations based on your mental model.
- `TARS: Explain Code`
  Analyzes and explains the selected code based on your profile.

#### Utility Commands

- `TARS: Flush Tars State`
  Clears all stored user profiling (useful to re-run the quiz or reset state).
- `TARS: Log User Mind`
  Logs the current profiling data to the console (for debugging).

---
## What is Theory of Mind + Perspective Taking?
TARS uses a **Theory of Mind (ToM)** mechanism to model you as a user:
Your experience level, learning goals, preferred tone, and more are captured through an initial quiz.
Then, using **Perspective Taking**, the LLM adjusts its explanations as if it were reasoning from your point of view, just like a good teacher would.
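To make the idea concrete, here is a small, hypothetical sketch of how a quiz-derived profile could be folded into a system prompt; the field names and wording are illustrative, not TARS's actual schema.

```js
// Hypothetical sketch: the profile fields and prompt wording are illustrative,
// not TARS's real profiling schema or prompts.
const userMind = {
  experienceLevel: "junior",              // captured by the profiling quiz
  goal: "get comfortable with async JavaScript",
  preferredTone: "friendly and concise",
};

// Perspective Taking, informally: the profile becomes part of the system prompt,
// so the model reasons "as" the reader rather than for a generic audience.
function buildSystemPrompt(profile) {
  return [
    "You are TARS, a code-explanation assistant.",
    `The reader is a ${profile.experienceLevel} developer whose goal is to ${profile.goal}.`,
    `Write in a ${profile.preferredTone} tone, explaining the selected code`,
    "as if you were reasoning from the reader's point of view.",
  ].join(" ");
}

console.log(buildSystemPrompt(userMind));
```

---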
## Architecture Overview
- **Code Construct Parser** - Extracts functions, classes, and logical blocks using the VS Code Symbol API. Also supports multi-file links in your codebase!
- **LangGraph Agent** - Powers the self-refining loop and perspective-aware reasoning.
- **ToM Profiler** - Captures your mental state and injects it into agent prompts.
- **Decorator Engine** - Renders inline visual explanations as you code (a sketch of the parser-and-decorator flow follows this list).
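To illustrate how the parser and decorator pieces can fit together, here is a minimal sketch built only on the standard VS Code extension API; it is an assumption-laden illustration of the flow above, not TARS's actual implementation, and `explainFn` stands in for the LangGraph agent.

```js
// Minimal sketch using the standard VS Code extension API; not TARS's actual code.
// `explainFn` is a hypothetical hook standing in for the LangGraph agent.
const vscode = require("vscode");

// Decoration type that renders dimmed text after a line (the inline explanation).
const explanationDecoration = vscode.window.createTextEditorDecorationType({
  after: { color: new vscode.ThemeColor("editorCodeLens.foreground"), margin: "0 0 0 1em" },
});

async function explainSymbolsInEditor(editor, explainFn) {
  // "Code Construct Parser": ask VS Code's built-in symbol provider for the
  // functions/classes/blocks in the current document.
  const symbols = await vscode.commands.executeCommand(
    "vscode.executeDocumentSymbolProvider",
    editor.document.uri
  );
  if (!symbols || symbols.length === 0) return;

  // Explain each construct and attach the result at the end of its first line.
  const decorations = [];
  for (const symbol of symbols) {
    // Providers may return DocumentSymbol (has .range) or SymbolInformation (has .location.range).
    const range = symbol.range ?? symbol.location.range;
    const code = editor.document.getText(range);
    const explanation = await explainFn(code);
    const firstLine = editor.document.lineAt(range.start.line);
    decorations.push({
      range: firstLine.range,
      renderOptions: { after: { contentText: ` ${explanation}` } },
    });
  }

  // "Decorator Engine": paint all explanations in one call.
  editor.setDecorations(explanationDecoration, decorations);
}
```

---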
## License
MIT © 2025 - Created by [Leopoldo Todisco](https://github.com/leotodisco)
---
## Contributing
Pull requests, suggestions, and contributions are welcome.
Feel free to open issues to propose ideas or report bugs!