https://github.com/chuanqisun/iter
Prompt Engineer's Playground
- Host: GitHub
- URL: https://github.com/chuanqisun/iter
- Owner: chuanqisun
- License: mit
- Created: 2023-05-09T18:36:06.000Z (over 2 years ago)
- Default Branch: master
- Last Pushed: 2025-08-06T00:04:32.000Z (2 months ago)
- Last Synced: 2025-08-06T02:25:30.784Z (2 months ago)
- Topics: devtool, gpt, openai, openai-api, playground, sandbox
- Language: TypeScript
- Homepage: https://chuanqisun.github.io/iter/
- Size: 1.17 MB
- Stars: 11
- Watchers: 2
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Iter
A minimalist frontend for Gen AI Chat models, optimized for rapid prompt iteration.
- **🔒 Privacy first**: Credentials are stored in your browser. All requests are sent directly to the API with no middleman. Absolutely no tracking.
- **⚡ API endpoint and model hot-swap**: Switch between different APIs and models without losing any chat progress
- **🦉 Adapts to OS/Browser default theme**: Dark theme for happy night owls
- **💅 Markdown parser**: Built-in syntax highlight and copy button for code blocks
- **📋 Smart paste**: HTML pastes as markdown, images as input, and files as attachments
- **🧭 Artifacts**: Live edit and preview code blocks for SVG, HTML, Mermaid, TypeScript, and React
- **💻 Interpreter**: Process uploaded files with TypeScript, with access to the NPM registry, a virtual file system, and an LLM prompt API
- **🖱️ Cursor chat**: Precisely edit the selected text
- **📸 Vision input**: Handle visual inputs with multi-modal models
- **🎙️ Speech input**: Use microphone to input text that can be mixed with typed message
- **📋 Document input**: Interpret PDF and text files without conversion

## Screenshots
Create a runnable program from text
Recreate the UI of Airbnb with a single screenshot
## Supported model providers
- OpenAI\*
  - ✅ GPT-5
  - ✅ GPT-5-mini
  - ✅ GPT-5-nano
  - ✅ codex-mini
  - ✅ o4-mini
  - ✅ o3-pro
  - ✅ o3
  - ✅ o3-mini
  - ✅ GPT-4.5-preview
  - ✅ GPT-4.1
  - ✅ GPT-4.1-mini
  - ✅ GPT-4.1-nano
  - ✅ GPT-4o
  - ✅ GPT-4o-mini
- Anthropic
  - ✅ Claude Opus 4.1
  - ✅ Claude Opus 4
  - ✅ Claude Sonnet 4
  - ✅ Claude 3.7 Sonnet
  - ✅ Claude 3.5 Sonnet
  - ✅ Claude 3.5 Haiku
- Google Generative AI
  - ✅ Gemini 2.5 Pro
  - ✅ Gemini 2.5 Flash
  - ✅ Gemini 2.5 Flash Preview
  - ✅ Gemini 2.5 Flash Lite
  - ✅ Gemini 2.0 Flash
  - ✅ Gemini 2.0 Flash Lite
  - ✅ Gemini 2.0 Flash Thinking
- xAI\*\*
  - ✅ Grok 4
  - ✅ Grok 3
  - ✅ Grok 3 Fast
  - ✅ Grok 3 Mini
  - ✅ Grok 3 Mini Fast
- OpenRouter
  - All chat models

\*See the detailed support matrix for [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/responses?tabs=python-secure#responses-api)
\*\*xAI models do not support PDF documents

## Keyboard shortcuts
Mac users, please use ⌘ instead of Ctrl
| Action | Shortcut |
| -------------------- | --------------------------------------------------------------- |
| Send message | Ctrl + Enter (in any textarea) |
| Abort action | Escape (when streaming response) |
| Dictate | Shift + Space (hold to talk) |
| Open response editor | Enter or double click (when focusing response block) |
| Open artifact editor | Enter or double click (when focusing artifact block) |
| Toggle cursor chat | Ctrl + K (in artifact or response editor) |
| Rerun artifact | Ctrl + Enter (in artifact editor) |
| Exit editor | Escape (in artifact or response editor) |
| Select up/down | ↑ / ↓ |
| Create backup | Ctrl + S |
| Restore backup | Ctrl + O |
| Export | Ctrl + Shift + E |
| Import               | Ctrl + Shift + O                                                |

## Directives
Directives force the LLM to generate code that performs specific tasks. The generated code takes effect only after you run it manually in the editor.
### `Run` directive
Include a `run` block in the user message to force the LLM to generate code and output files.
````
```run```
````
The run block can take optional directives to expose additional APIs to the generated code.
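The README does not show what the generated code looks like, so the following is only a hypothetical sketch of what a `run` block asked to summarize an uploaded file might emit. The `VirtualFS` interface and the `prompt` function below are in-memory stand-ins for the interpreter's virtual file system and LLM prompt API, not Iter's real interfaces.

```typescript
// Hypothetical sketch only: Iter's real virtual-FS and LLM prompt APIs are
// not documented here, so both are stubbed in memory for illustration.
interface VirtualFS {
  readText(path: string): string;
  writeText(path: string, content: string): void;
}

// Stand-in for the LLM prompt API a `run llm` block would expose.
type PromptFn = (prompt: string) => Promise<string>;

function makeMemoryFS(seed: Record<string, string>): VirtualFS {
  const files = new Map(Object.entries(seed));
  return {
    readText: (path) => {
      const content = files.get(path);
      if (content === undefined) throw new Error(`no such file: ${path}`);
      return content;
    },
    writeText: (path, content) => {
      files.set(path, content);
    },
  };
}

// Example task: summarize an uploaded file and write the result back out.
async function runBlock(fs: VirtualFS, prompt: PromptFn): Promise<string> {
  const notes = fs.readText("notes.txt");
  const summary = await prompt(`Summarize in one line:\n${notes}`);
  fs.writeText("summary.txt", summary);
  return fs.readText("summary.txt");
}

// Fake LLM so the sketch runs standalone.
const fakeLLM: PromptFn = async (p) =>
  `(${p.split("\n").length - 1} lines summarized)`;

runBlock(makeMemoryFS({ "notes.txt": "alpha\nbeta\ngamma" }), fakeLLM).then(
  (s) => console.log(s) // → "(3 lines summarized)"
);
```

The point of the sketch is the shape of the contract: the generated code reads and writes files through the virtual file system, and any model calls go through the prompt function rather than the network directly.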
**`llm`**: Generate code that can prompt the active LLM model.
````
```run llm```
````

### `Edit` directive
Include an `edit` block in the user message to force the LLM to generate code that edits the nearest assistant message. Other messages will be hidden.
````
```edit```
````

## Attachments
You can copy/paste or upload files into each message in one of the following formats:
- **Embedded**: The LLM can see the image, PDF, or text content. With the `run` directive, it can also write code that accesses the content as a file.
- **External**: The LLM sees only the file's metadata (name, size, type, etc.). With the `run` directive, it can write code that accesses the content as a file, without the content itself ever entering the prompt.
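The difference between the two modes comes down to what reaches the model. The following is a hypothetical sketch of that distinction; the types are illustrative stand-ins, not Iter's actual internal representation.

```typescript
// Hypothetical sketch of the embedded/external attachment distinction.
// These types are stand-ins for illustration, not Iter's real data model.
interface AttachmentMeta {
  name: string;
  size: number;
  type: string;
}

type Attachment =
  | { mode: "embedded"; meta: AttachmentMeta; content: string }
  | { mode: "external"; meta: AttachmentMeta };

// What the model would see for each attachment: full content when embedded,
// metadata only when external (the bytes stay on the file-system side).
function toPromptContext(a: Attachment): string {
  const header = `${a.meta.name} (${a.meta.type}, ${a.meta.size} bytes)`;
  return a.mode === "embedded" ? `${header}\n${a.content}` : header;
}

const embedded: Attachment = {
  mode: "embedded",
  meta: { name: "notes.txt", size: 11, type: "text/plain" },
  content: "hello world",
};
const external: Attachment = {
  mode: "external",
  meta: { name: "big.csv", size: 1048576, type: "text/csv" },
};

console.log(toPromptContext(embedded)); // header plus full content
console.log(toPromptContext(external)); // header only
```

Modeled this way, an external attachment is a cheap way to hand a large file to `run`-generated code without spending context-window tokens on its contents.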