{"id":13584864,"url":"https://github.com/haseeb-heaven/code-interpreter","last_synced_at":"2026-04-07T12:01:12.064Z","repository":{"id":198970482,"uuid":"701162675","full_name":"haseeb-heaven/code-interpreter","owner":"haseeb-heaven","description":"An innovative open-source Code Interpreter with (GPT,Gemini,Claude,LLaMa) models.","archived":false,"fork":false,"pushed_at":"2026-04-05T15:11:58.000Z","size":12494,"stargazers_count":276,"open_issues_count":8,"forks_count":42,"subscribers_count":6,"default_branch":"main","last_synced_at":"2026-04-05T15:14:37.016Z","etag":null,"topics":["bard-coder","bing-coder","bingai","chatbot","chatgpt","code-interpreter","code-llama","google-bard","gpt","gpt-4","huggingface","interpreter","llm","llm-coder","open-interpreter","openai","phind-coder","python","wizard-coder"],"latest_commit_sha":null,"homepage":"https://pypi.org/project/open-code-interpreter/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/haseeb-heaven.png","metadata":{"files":{"readme":"README.md","changelog":"history/history.json","contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-10-06T04:08:43.000Z","updated_at":"2026-04-01T17:17:38.000Z","dependencies_parsed_at":"2023-12-01T04:26:25.286Z","dependency_job_id":"67a41827-5923-492e-bd74-b75bece81704","html_url":"https://github.com/haseeb-heaven/code-interpreter","commit_stats":null,"previous_names":["haseeb-heaven/open-code-interpreter","haseeb-heaven/code-interpreter"],"tags_count":5,"template":false,"template_full_name":null,"purl":"pkg:github/haseeb-heaven/code-interpreter","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/haseeb-heaven%2Fcode-interpreter","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/haseeb-heaven%2Fcode-interpreter/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/haseeb-heaven%2Fcode-interpreter/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/haseeb-heaven%2Fcode-interpreter/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/haseeb-heaven","download_url":"https://codeload.github.com/haseeb-heaven/code-interpreter/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/haseeb-heaven%2Fcode-interpreter/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31511784,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-07T03:10:19.677Z","status":"ssl_error","status_checked_at":"2026-04-07T03:10:13.982Z","response_time":105,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bard-coder","bing-coder","bingai","chatbot","chatgpt","code-interpreter","code-llama","google-bard","gpt","gpt-4","huggingface","interpreter","llm","llm-coder","open-interpreter","openai","phind-coder","python","wizard-coder"],"created_at":"2024-08-01T15:04:34.445Z","updated_at":"2026-04-07T12:01:12.057Z","avatar_url":"https://github.com/haseeb-heaven.png","language":"Python","readme":"![Interpreter](https://github.com/haseeb-heaven/open-code-interpreter/blob/main/resources/movie.gif?raw=true)\n\n### **Hosting and Spaces:**\n[![Colab](https://img.shields.io/badge/Google-Colab-blue)](https://colab.research.google.com/drive/1jGg-NavH8t4W2UVs8MyVMv8bs49qggfA?usp=sharing)\n[![Replit](https://img.shields.io/badge/Replit-IDE-blue)](https://replit.com/@HaseebMir/open-code-interpreter)\n[![PyPi](https://img.shields.io/badge/PyPi-Package-blue)](https://pypi.org/project/open-code-interpreter/)\n[![Building](https://github.com/haseeb-heaven/Open-Code-Interpreter/actions/workflows/python-app.yml/badge.svg)](https://github.com/haseeb-heaven/Open-Code-Interpreter/actions/workflows/python-app.yml)\n\n### **Support Project:**\n\u003ca href=\"https://www.buymeacoffee.com/haseebheaven\"\u003e\n    \u003cimg src=\"https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee\u0026emoji=\u0026slug=haseebheaven\u0026button_colour=40DCA5\u0026font_colour=ffffff\u0026font_family=Cookie\u0026outline_colour=000000\u0026coffee_colour=FFDD00\" width=\"200\" height=\"50\" /\u003e\n\u003c/a\u003e\n\u003ca href=\"https://ko-fi.com/heavenhm\"\u003e\n    \u003cimg src=\"https://img.shields.io/badge/KoFi-ffdd00?style=for-the-badge\u0026logo=Ko-fi\u0026logoColor=orange\" width=\"200\" height=\"50\" /\u003e\n\u003c/a\u003e\n\n**Welcome to Code-Interpreter 🎉,** an open-source tool that transforms natural language instructions into executable code using **OpenAI**, **Gemini**, **Groq**, **Claude**, **DeepSeek**, **NVIDIA**, **Z AI**, **Browser Use**, and **HuggingFace** models. It executes code safely and supports vision models for image processing.\n\nSupports tasks like file operations, image editing, video processing, data analysis, and more. Works on Windows, MacOS, and Linux.\n\n## **Why Unique?**\n\nCommitted to being **free** and **simple** - no downloads or tedious setups required. 
Works on Windows, Linux, macOS.\n\n## Table of Contents\n- [Features](#features)\n- [Installation](#installation)\n- [Usage](#usage)\n- [Examples](#examples)\n- [TUI Screenshots](#tui-screenshots)\n- [Settings](#settings)\n- [Contributing](#contributing)\n- [Versioning](#versioning)\n- [Changelog](#changelog)\n- [License](#license)\n- [Acknowledgments](#acknowledgments)\n\n## **Installation**\n\n### Installation with Python package manager\nTo install Code-Interpreter, run the following command:\n\n```bash\npip install open-code-interpreter\n```\n\nTo run the interpreter with Python:\n\n```bash\npython interpreter.py -m 'z-ai-glm-5' -md 'code'\n```\n\nMake sure you install the required packages before running the interpreter and have your API keys set up in the `.env` file.\n\n### Installation with Git\nTo get started with Code-Interpreter, follow these steps:\n\n1. Clone the repository:\n\n```bash\ngit clone https://github.com/haseeb-heaven/code-interpreter.git\ncd code-interpreter\n```\n\n2. Install the required packages:\n\n```bash\npip install -r requirements.txt\n```\n\n3. Copy the example environment file and add the keys you plan to use:\n\n```bash\ncp .env.example .env\n```\n\n## API Key setup for All models\n\nFollow the steps below to obtain and set up the API keys for each service:\n\n1. **Obtain the API keys:**\n    - HuggingFace: Visit [HuggingFace Tokens](https://huggingface.co/settings/tokens) and get your Access Token.\n    - Google Gemini: Visit [Google AI Studio](https://makersuite.google.com/app/apikey) and click on the **Create API Key** button.\n    - OpenAI: Visit [OpenAI Dashboard](https://platform.openai.com/account/api-keys), sign up or log in, navigate to the API section in your account dashboard, and click on the **Create New Key** button.\n    - Groq AI: Visit [Groq AI Console](https://console.groq.com/keys), sign up or log in, and click on the **Create API Key** button.\n    - Anthropic AI: Visit [Anthropic AI Console](https://console.anthropic.com/settings/keys), sign up or log in, and click on the **Create Key** button.\n    - NVIDIA API Catalog: Visit [NVIDIA Build](https://build.nvidia.com/), create a key, and use `NVIDIA_API_KEY`.\n    - Z AI: Visit [Z AI Docs](https://docs.z.ai/) and use `Z_AI_API_KEY`.\n    - OpenRouter: Visit [OpenRouter Keys](https://openrouter.ai/settings/keys) and use `OPENROUTER_API_KEY`.\n    - Browser Use: Visit [Browser Use Docs](https://docs.browser-use.com/) and use `BROWSER_USE_API_KEY`.\n\n2. **Save the API keys:**\n\nCreate a `.env` file in your project root directory and add the following lines:\n\n```bash\nexport HUGGINGFACE_API_KEY=\"Your HuggingFace API Key\"\nexport GEMINI_API_KEY=\"Your Google Gemini API Key\"\nexport OPENAI_API_KEY=\"Your OpenAI API Key\"\nexport GROQ_API_KEY=\"Your Groq API Key\"\nexport ANTHROPIC_API_KEY=\"Your Anthropic API Key\"\nexport DEEPSEEK_API_KEY=\"Your Deepseek API Key\"\nexport NVIDIA_API_KEY=\"Your NVIDIA API Key\"\nexport Z_AI_API_KEY=\"Your Z AI API Key\"\nexport OPENROUTER_API_KEY=\"Your OpenRouter API Key\"\nexport BROWSER_USE_API_KEY=\"Your Browser Use API Key\"\n```\n\n## Offline models setup\n\nThis Interpreter supports offline models via **LM Studio** and **Ollama**. 
Follow the steps below:\n\n- Download any model from [LM Studio](https://lmstudio.ai/) like _Phi-2, Code-Llama, Mistral_.\n- In the app go to **Local Server** option and select the model.\n- Start the server and copy the **URL** (LM-Studio will provide you with the URL).\n- Run command `ollama serve` and copy the **URL** (Ollama will provide you with the URL).\n- Open config file `configs/local-model.json` and paste the **URL** in the `api_base` field.\n- Set the model name to `local-model` and run the interpreter.\n\n```bash\npython interpreter.py -md 'code' -m 'local-model'\n```\n\n## **Features**\n\n- 🚀 Executes generated code from instructions\n- 💾 Saves and edits code with advanced editor\n- 📡 Supports offline models via LM Studio and Ollama\n- 📜 Command history and mode selection\n- 🧠 Multiple models and languages (Python/JavaScript)\n- 👀 Code review before execution\n- 🛡️ Safe sandbox execution with timeout and security\n- 🔁 Self-repair for failed executions\n- 💻 Cross-platform (Windows/macOS/Linux)\n- 🤝 Integrates with HuggingFace, OpenAI, Gemini, Groq, Claude, DeepSeek, NVIDIA, Z AI, OpenRouter, Browser Use\n- 🎯 Versatile tasks: file ops, image/video editing, data analysis\n\n## **Safety Features**\n\n### Mode Indicator\nThe interpreter displays the current safety mode in the session banner:\n- **[SAFE MODE]** - Default mode with safety restrictions enabled (green)\n- **[UNSAFE MODE ⚠️]** - Unrestricted mode (red with warning emoji)\n\n### Dangerous Operation Handling\nThe interpreter handles dangerous operations with a single confirmation prompt:\n\n**SAFE MODE:**\n- Dangerous operations are **blocked entirely** (no confirmation prompt)\n- You will see: `❌ Dangerous operation blocked in SAFE MODE.`\n- No file deletion or modification operations are allowed\n\n**UNSAFE MODE:**\n- Single prompt for ALL operations (safe or dangerous)\n- Safe operations: `Execute the code? (Y/N):`\n- Dangerous operations: `⚠️ Dangerous operation. Continue? (Y/N):`\n- Operations execute only if you confirm with `Y`\n\nTo enable unsafe mode:\n```bash\npython interpreter.py --unsafe\n```\n\nTo enable safe mode:\n```bash\npython interpreter.py --sandbox\n```\n\n\u003e **Warning:** Use unsafe mode with caution. Dangerous operations can delete or modify your files.\n\n## 🛠️ **Usage**\n\nTo use Code-Interpreter, use the following command options:\n\n- List of all **programming languages**:\n    - `python` - Python programming language.\n    - `javascript` - JavaScript programming language.\n\n- List of all **modes**:\n    - `code` - Generates code from your instructions.\n    - `script` - Generates shell scripts from your instructions.\n    - `command` - Generates single line commands from your instructions.\n    - `vision` - Generates description of image or video.\n    - `chat` - Chat with your files and data.\n\n- See [Models.MD](Models.MD) for the complete list of supported models.\n\n### Start TUI (default)\n```bash\npython interpreter.py\n```\n\n`python interpreter.py` opens the TUI and uses arrow-key navigation in a real terminal. 
The TUI falls back to plain text prompts when stdin is piped or not attached to a terminal.\n\n### Open CLI mode\n```bash\npython interpreter.py --cli\n```\n\n`python interpreter.py --cli` automatically picks the best configured model from your `.env` file if you do not pass `-m`.\n\n### Run with sandbox (safe)\n```bash\npython interpreter.py --tui --sandbox\n```\n\n### Run without sandbox (unsafe)\n```bash\npython interpreter.py --cli --no-sandbox\n```\n\n### Upgrade interpreter\n```bash\npython interpreter.py --upgrade\n```\n\n### Live CLI smoke validation (stable models only)\n```bash\npython scripts/validate_models_cli.py --providers gemini,groq --tier stable --mode chat\npython scripts/validate_models_cli.py --providers openai,anthropic,deepseek,huggingface --tier stable --mode chat\npython scripts/validate_models_cli.py --providers nvidia,z-ai,browser-use,openrouter --tier stable --mode chat\n```\n\n### Direct provider examples\n```bash\npython interpreter.py -m 'nvidia-nemotron' -md 'chat' -dc\npython interpreter.py -m 'z-ai-glm-5' -md 'chat' -dc\npython interpreter.py -m 'openrouter-free' -md 'chat' -dc\npython interpreter.py -m 'openrouter-qwen3-coder' -md 'chat' -dc\npython interpreter.py -m 'browser-use-bu-max' -md 'chat' -dc\n```\n\nLast verified model baseline: **April 5, 2026**.\n\n## **TUI Screenshots**\n\nThe new TUI flow is designed for fast keyboard-first setup. Run `python interpreter.py` or `python interpreter.py --tui` to launch the selector UI, then use the arrow keys to choose the mode, model, language, and runtime options.\n\n### Mode selection\nChoose between `code`, `chat`, `script`, `command`, and `vision` before the session starts.\n\n![TUI mode selection](resources/interpreter-tui-mode-selection.png)\n\n### Model selection\nPick your provider and model directly from the terminal without typing long aliases manually.\n\n![TUI model selection](resources/interpreter-tui-model-selection.png)\n\n### Live output\nAfter entering the session, generated code and execution output remain inside the terminal flow with the same safer runtime behavior used by the CLI.\n\n![TUI output](resources/interpreter-tui-output.png)\n\n### Sandbox Security\nYou can enable or disable sandbox mode directly from the terminal session. 
This makes it easy to switch between the safer isolated runtime and unrestricted execution when needed.\n\n![TUI sandbox enable](resources/interpreter-sandbox-enable.png)\n\nWhen sandbox mode is enabled, commands and generated code run with the same safer execution constraints used by the CLI.\n\n![TUI sandbox disable](resources/interpreter-sandbox-disable.png)\n\nWhen sandbox mode is disabled, execution runs in unsafe mode without sandbox restrictions, intended only for trusted local workflows.\n\n## 🖥️ **Interpreter Commands**\n\nHere are the available commands:\n\n- 📝 `/save` - Save the last code generated.\n- ✏️ `/edit` - Edit the last code generated.\n- ▶️ `/execute` - Execute the last code generated.\n- 🔄 `/mode` - Change the mode of interpreter.\n- 🔄 `/model` - Change the model of interpreter.\n- 📦 `/install` - Install a package from npm or pip.\n- 🌐 `/language` - Change the language of the interpreter.\n- 🧹 `/clear` - Clear the screen.\n- 🆘 `/help` - Display this help message.\n- 🚪 `/list` - List all the _models/modes/language_ available.\n- 📝 `/version` - Display the version of the interpreter.\n- 🚪 `/exit` - Exit the interpreter.\n- 🐞 `/fix` - Fix the generated code for errors.\n- ⚙️ `/settings` - Open interactive TUI settings when running with `--tui`.\n- 📜 `/log` - Toggle different modes of logging.\n- ⏫ `/upgrade` - Upgrade the interpreter.\n- 📁 `/prompt` - Switch the prompt mode _File or Input_ modes.\n- 🐞 `/debug` - Toggle Debug mode for debugging.\n- 📦 `/sandbox` - Toggles secure sandbox system.\n\n## **Settings**\n\nYou can customize the settings of the current model from the `.json` file. It contains all the necessary parameters such as `temperature`, `max_tokens`, and more.\n\n### Steps to add your own custom API Server\nTo integrate your own API server for OpenAI instead of the default server, follow these steps:\n\n1. Navigate to the `Configs` directory.\n2. Open the configuration file for the model you want to modify (`gpt-3.5-turbo.json` or `gpt-4.json`).\n3. Add the following key-value pair to the JSON object:\n   ```json\n   \"api_base\": \"https://my-custom-base.com\"\n   ```\n4. Save and close the file.\n\nNow, whenever you select that model, the system will automatically use your custom server.\n\n## **Steps to add new models**\n\n### Manual Method\n1. Copy the `.json` file and rename it to `configs/hf-model-new.json`.\n2. Modify the parameters of the model like `start_sep`, `end_sep`.\n3. Set the model name from Hugging Face: `\"model\": \"Model name here\"`.\n4. Use it like this: `python interpreter.py -m 'hf-model-new' -md 'code'`.\n5. Make sure the `-m 'hf-model-new'` matches the config file inside the `configs` folder.\n\n### Automatic Method\n1. Go to the `scripts` directory and run the `config_builder` script.\n2. For Linux/MacOS run `config_builder.sh`, for Windows run `config_builder.bat`.\n3. Follow the instructions and enter the model name and parameters.\n4. 
The script will automatically create the `.json` file for you.\n\n## Star History\n\n\u003ca href=\"https://star-history.com/#haseeb-heaven/open-code-interpreter\u0026Date\"\u003e\n  \u003cpicture\u003e\n    \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://api.star-history.com/svg?repos=haseeb-heaven/open-code-interpreter\u0026type=Date\u0026theme=dark\" /\u003e\n    \u003csource media=\"(prefers-color-scheme: light)\" srcset=\"https://api.star-history.com/svg?repos=haseeb-heaven/open-code-interpreter\u0026type=Date\" /\u003e\n    \u003cimg alt=\"Star History Chart\" src=\"https://api.star-history.com/svg?repos=haseeb-heaven/open-code-interpreter\u0026type=Date\" /\u003e\n  \u003c/picture\u003e\n\u003c/a\u003e\n\n## **Contributing**\n\nIf you're interested in contributing to **Code-Interpreter**, we'd love to have you! Please fork the repository and submit a pull request. We welcome all contributions and are always eager to hear your feedback and suggestions for improvements.\n\n## **Versioning**\n\nCurrent version: **3.2.2**\n\nQuick highlights:\n- **v3.2.2** - Added sandbox security, improved Code Interpreter architecture, fixed execution language routing, restored sandbox toggle compatibility, added subprocess security delegation, and improved safe-mode timeout handling.\n- **v3.2.1** - Added mode indicator ([SAFE MODE] or [UNSAFE MODE ⚠️]) in session banner, implemented strict safety blocking for dangerous operations in SAFE MODE, added single confirmation prompt for operations in UNSAFE MODE.\n- **v3.1.0** - Added OpenRouter free-model aliases, made `openrouter/free` the default OpenRouter selection, improved simple-task code generation, added fresh TUI screenshots, and prepared release packaging assets.\n- **v3.0.0** - Added a default execution safety sandbox, dangerous command/code circuit breaker, bounded ReACT-style repair retries after failures, clearer execution feedback, and polished CLI/TUI runtime output.\n- **v2.4.1** - Added NVIDIA, Z AI, Browser Use, `.env.example`, and `--cli` / `--tui` startup flows.\n- **v2.4.0** - 2026 model refresh across OpenAI, Gemini, Anthropic, Groq, and DeepSeek.\n\nFull release history: [CHANGELOG.md](CHANGELOG.md)\n\n---\n\n## **License**\n\nThis project is licensed under the **MIT License**. For more details, please refer to the LICENSE file.\n\nPlease note the following additional licensing details:\nThis project is a client interface only. All models are provided by their respective third-party providers and subject to their own terms of service.\n\n## **Acknowledgments**\n\n- We would like to express our gratitude to **HuggingFace**,**Google**,**META**,**OpenAI**,**GroqAI**,**AnthropicAI** for providing the models.\n- A special shout-out to the open-source community. 
Your continuous support and contributions are invaluable to us.\n\n## **Author**\nThis project is created and maintained by [Haseeb-Heaven](https://github.com/haseeb-heaven).\n","funding_links":["https://www.buymeacoffee.com/haseebheaven","https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee\u0026emoji=\u0026slug=haseebheaven\u0026button_colour=40DCA5\u0026font_colour=ffffff\u0026font_family=Cookie\u0026outline_colour=000000\u0026coffee_colour=FFDD00","https://ko-fi.com/heavenhm"],"categories":["Python","HarmonyOS","Building"],"sub_categories":["Windows Manager","LLM Models"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhaseeb-heaven%2Fcode-interpreter","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhaseeb-heaven%2Fcode-interpreter","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhaseeb-heaven%2Fcode-interpreter/lists"}