# free-coding-models
Follow me on X (@vavanessadev)
Find the fastest free coding model in seconds
Ping 174 models across 23 free AI providers in real time
Install free API endpoints into your favorite AI coding tool:
OpenCode, OpenClaw, Crush, 🪿 Goose, Aider, Qwen Code, 🤲 OpenHands, ⚡ Amp, π Pi, Rovo, or Gemini, in one keystroke
```bash
npm install -g free-coding-models
free-coding-models
```
Then create a free account on one of the [providers](#-list-of-free-ai-providers).

Why • Quick Start • Providers • Usage • TUI Keys • Contributing
Made with ❤️ by Vanessa Depraute (aka Vava-Nessa)
---
## 💡 Why this tool?
There are **174+ free coding models** scattered across 23 providers. Which one is fastest right now? Which one is actually stable versus just lucky on the last ping?
This CLI pings them all in parallel, shows live latency, and calculates a **live Stability Score (0-100)**. Average latency alone is misleading if a model randomly spikes to 6 seconds; the stability score measures true reliability by combining **p95 latency** (30%), **jitter/variance** (30%), **spike rate** (20%), and **uptime** (20%).
It then writes the model you pick directly into your coding tool's config, so you go from "which model?" to "coding" in under 10 seconds.
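For illustration, the composite can be sketched in a few lines of JavaScript. The weights match those listed above; the normalization constants (5 s as a "bad" p95, 1 s as "bad" jitter) and the 3×-mean spike threshold are assumptions made for this sketch, not the tool's actual internals:

```javascript
// Sketch of a composite stability score (0–100) from a series of ping
// samples (latency in ms, or null for a failed ping).
// Weights: p95 latency 30%, jitter 30%, spike rate 20%, uptime 20%.
function stabilityScore(samples) {
  const ok = samples.filter((ms) => ms !== null); // null = failed ping
  if (ok.length === 0) return 0;

  const sorted = [...ok].sort((a, b) => a - b);
  const p95 = sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
  const mean = ok.reduce((s, v) => s + v, 0) / ok.length;
  const jitter = Math.sqrt(ok.reduce((s, v) => s + (v - mean) ** 2, 0) / ok.length);
  const spikeRate = ok.filter((v) => v > 3 * mean).length / ok.length; // assumed threshold
  const uptime = ok.length / samples.length;

  // Map each component to 0..1 (higher = better), then apply the weights.
  const p95Score = Math.max(0, 1 - p95 / 5000);       // assume 5 s p95 is "bad"
  const jitterScore = Math.max(0, 1 - jitter / 1000); // assume 1 s stdev is "bad"
  return Math.round(
    100 * (0.3 * p95Score + 0.3 * jitterScore + 0.2 * (1 - spikeRate) + 0.2 * uptime)
  );
}
```

Stable endpoints score near 100; an endpoint that usually answers in 200 ms but occasionally spikes to several seconds or drops requests is pulled down by the p95, spike, and uptime terms even though its average looks fine.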
---
## ⚡ Quick Start
### 🟢 List of Free AI Providers
Create a free account on one provider below to get started:
**174 coding models** across 23 providers, ranked by [SWE-bench Verified](https://www.swebench.com).
| Provider | Models | Tier range | Free tier | Env var |
|----------|--------|-----------|-----------|--------|
| [NVIDIA NIM](https://build.nvidia.com) | 44 | S+ → C | 40 req/min (no credit card needed) | `NVIDIA_API_KEY` |
| [iFlow](https://platform.iflow.cn) | 11 | S+ → A+ | Free for individuals (no req limits, 7-day key expiry) | `IFLOW_API_KEY` |
| [ZAI](https://z.ai) | 7 | S+ → S | Free tier (generous quota) | `ZAI_API_KEY` |
| [Alibaba DashScope](https://modelstudio.console.alibabacloud.com) | 8 | S+ → A | 1M free tokens per model (Singapore region, 90 days) | `DASHSCOPE_API_KEY` |
| [Groq](https://console.groq.com/keys) | 10 | S → B | 30–50 RPM per model (varies by model) | `GROQ_API_KEY` |
| [Cerebras](https://cloud.cerebras.ai) | 7 | S+ → B | Generous free tier (developer tier 10× higher limits) | `CEREBRAS_API_KEY` |
| [SambaNova](https://sambanova.ai/developers) | 12 | S+ → B | Dev tier generous quota | `SAMBANOVA_API_KEY` |
| [OpenRouter](https://openrouter.ai/keys) | 11 | S+ → C | Free on `:free` models: 50/day <$10, 1000/day ≥$10 (20 req/min) | `OPENROUTER_API_KEY` |
| [Hugging Face](https://huggingface.co/settings/tokens) | 2 | S → B | Free monthly credits (~$0.10) | `HUGGINGFACE_API_KEY` |
| [Together AI](https://api.together.ai/settings/api-keys) | 7 | S+ → A- | Credits/promos vary by account (check console) | `TOGETHER_API_KEY` |
| [DeepInfra](https://deepinfra.com/login) | 2 | A- → B+ | 200 concurrent requests (default) | `DEEPINFRA_API_KEY` |
| [Fireworks AI](https://fireworks.ai) | 2 | S | $1 credits → 10 req/min without payment | `FIREWORKS_API_KEY` |
| [Mistral Codestral](https://codestral.mistral.ai) | 1 | B+ | 30 req/min, 2000/day | `CODESTRAL_API_KEY` |
| [Hyperbolic](https://app.hyperbolic.ai/settings) | 10 | S+ → A- | $1 free trial credits | `HYPERBOLIC_API_KEY` |
| [Scaleway](https://console.scaleway.com/iam/api-keys) | 7 | S+ → B+ | 1M free tokens | `SCALEWAY_API_KEY` |
| [Google AI Studio](https://aistudio.google.com/apikey) | 3 | B → C | 14.4K req/day, 30/min | `GOOGLE_API_KEY` |
| [SiliconFlow](https://cloud.siliconflow.cn/account/ak) | 6 | S+ → A | Free models: usually 100 RPM, varies by model | `SILICONFLOW_API_KEY` |
| [Cloudflare Workers AI](https://dash.cloudflare.com) | 6 | S → B | Free: 10k neurons/day, text-gen 300 RPM | `CLOUDFLARE_API_TOKEN` + `CLOUDFLARE_ACCOUNT_ID` |
| [Perplexity API](https://www.perplexity.ai/settings/api) | 4 | A+ → B | Tiered limits by spend (default ~50 RPM) | `PERPLEXITY_API_KEY` |
| [Replicate](https://replicate.com/account/api-tokens) | 1 | A- | 6 req/min (no payment) → up to 3,000 RPM with payment | `REPLICATE_API_TOKEN` |
| [Rovo Dev CLI](https://www.atlassian.com/rovo) | 1 | S+ | 5M tokens/day (beta) | CLI tool |
| [Gemini CLI](https://github.com/google-gemini/gemini-cli) | 3 | S+ → A+ | 1,000 req/day | CLI tool |
| [OpenCode Zen](https://opencode.ai/zen) | 8 | S+ → A+ | Free with OpenCode account | Zen models ✨ |
> 💡 One key is enough. Add more at any time with **`P`** inside the TUI.
### Tier scale
| Tier | SWE-bench | Best for |
|------|-----------|----------|
| **S+** | ≥ 70% | Complex refactors, real-world GitHub issues |
| **S** | 60–70% | Most coding tasks, strong general use |
| **A+/A** | 40–60% | Solid alternatives, targeted programming |
| **A-/B+** | 30–40% | Smaller tasks, constrained infra |
| **B/C** | < 30% | Code completion, edge/minimal setups |
**① Install and run:**
```bash
npm install -g free-coding-models
free-coding-models
```
On first run, you'll be prompted to enter your API key(s). You can skip providers and add more later with **`P`**.
Open the command palette at any time with **Ctrl+P**.
Terminal theme fighting the TUI's contrast? Press **`G`** at any time to cycle **Auto → Dark → Light**. The switch recolors the full interface live: table, Settings, Help, Smart Recommend, Feedback, and Changelog.
**② Pick a model and launch your tool:**
```
↑↓ navigate → Enter to launch
```
The model you select is automatically written into your tool's config (OpenCode, OpenClaw, Crush, etc.) and the tool opens immediately. Done.
If the active CLI tool is not installed, FCM catches this before launch, offers a small Yes/No install prompt, installs the tool with its official global command, then resumes the same model launch automatically.
> 💡 You can also run `free-coding-models --goose --tier S` to pre-filter to S-tier models for Goose before the TUI even opens.
## Usage
### Common scenarios
```bash
# "I want the most reliable model right now"
free-coding-models --fiable
# "I want to configure Goose with an S-tier model"
free-coding-models --goose --tier S
# "I want NVIDIA's top models only"
free-coding-models --origin nvidia --tier S
# "Start with an elite-focused preset, then adjust filters live"
free-coding-models --premium
# "I want to script this โ give me JSON"
free-coding-models --tier S --json | jq -r '.[0].modelId'
# "I want to configure OpenClaw with Groq's fastest model"
free-coding-models --openclaw --origin groq
```
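The `--json` output also works without `jq`. A minimal Node sketch, assuming the output is an array of model objects; the `modelId` and `latencyMs` field names are assumptions for illustration, so check the real shape by running `free-coding-models --json` yourself:

```javascript
// Sketch: pick the lowest-latency model from `--json` output.
// Field names (`modelId`, `latencyMs`) are assumed, not guaranteed.
function fastestModel(jsonText) {
  const models = JSON.parse(jsonText);
  return (
    models
      .filter((m) => typeof m.latencyMs === "number") // drop failed pings
      .sort((a, b) => a.latencyMs - b.latencyMs)[0]?.modelId ?? null
  );
}

// Canned data; in practice, pipe in the real CLI output instead.
const sample = JSON.stringify([
  { modelId: "model-a", latencyMs: 420 },
  { modelId: "model-b", latencyMs: 180 },
  { modelId: "model-c", latencyMs: null },
]);
console.log(fastestModel(sample)); // "model-b"
```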
### Tool launcher flags
| Flag | Launches |
|------|----------|
| `--opencode` | OpenCode CLI |
| `--opencode-desktop` | OpenCode Desktop |
| `--openclaw` | OpenClaw |
| `--crush` | Crush |
| `--goose` | 🪿 Goose |
| `--aider` | Aider |
| `--qwen` | Qwen Code |
| `--openhands` | 🤲 OpenHands |
| `--amp` | ⚡ Amp |
| `--pi` | π Pi |
| `--rovo` | Rovo Dev CLI |
| `--gemini` | Gemini CLI |
Press **`Z`** in the TUI to cycle between tools without restarting.
### CLI-Only Tools
**Rovo Dev CLI**
- Provider: [Atlassian Rovo](https://www.atlassian.com/rovo)
- Install: [Installation Guide](https://support.atlassian.com/rovo/docs/install-and-run-rovo-dev-cli-on-your-device/)
- Free tier: 5M tokens/day (beta, requires Atlassian account)
- Model: Claude Sonnet 4 (72.7% SWE-bench)
- Launch: `free-coding-models --rovo` or press `Z` until Rovo mode
- Features: Jira/Confluence integration, MCP server support
**Gemini CLI**
- Provider: [Google Gemini](https://github.com/google-gemini/gemini-cli)
- Install: `npm install -g @google/gemini-cli`
- Free tier: 1,000 requests/day (personal Google account, no credit card)
- Models: Gemini 3 Pro (76.2% SWE-bench), Gemini 2.5 Pro, Gemini 2.5 Flash
- Launch: `free-coding-models --gemini` or press `Z` until Gemini mode
- Features: OpenAI-compatible API support, MCP server support, Google Search grounding
**Note:** When launching these tools via `Z` key or command palette, if the current mode doesn't match the tool, you'll see a confirmation alert asking to switch to the correct tool before launching.
### OpenCode Zen Free Models
[OpenCode Zen](https://opencode.ai/zen) is a hosted AI gateway offering 8 free coding models exclusively through OpenCode CLI and OpenCode Desktop. These models are **not** available through other tools.
| Model | Tier | SWE-bench | Context |
|-------|------|-----------|---------|
| Big Pickle | S+ | 72.0% | 200k |
| MiniMax M2.5 Free | S+ | 80.2% | 200k |
| MiMo V2 Pro Free | S+ | 78.0% | 1M |
| MiMo V2 Omni Free | S | 64.0% | 128k |
| MiMo V2 Flash Free | S+ | 73.4% | 256k |
| Nemotron 3 Super Free | A+ | 52.0% | 128k |
| GPT 5 Nano | S | 65.0% | 128k |
| Trinity Large Preview Free | S | 62.0% | 128k |
To use Zen models: sign up at [opencode.ai/auth](https://opencode.ai/auth) and enter your Zen API key via `P` (Settings). Zen models appear in the main table and auto-switch to OpenCode CLI on launch.
### Tool Compatibility
When a tool mode is active (via `Z`), models incompatible with that tool are highlighted with a dark red background so you can instantly see which models work with your current tool.
| Model Type | Compatible Tools |
|------------|-----------------|
| Regular (NVIDIA, Groq, etc.) | All tools except Rovo and Gemini |
| Rovo | Rovo Dev CLI only |
| Gemini | Gemini CLI only |
| OpenCode Zen | OpenCode CLI and OpenCode Desktop only |
→ **[Full flags reference](./docs/flags.md)**
---
## ⌨️ TUI Keys
### Keyboard
| Key | Action |
|-----|--------|
| `↑↓` | Navigate models |
| `Enter` | Launch selected model in active tool |
| `Z` | Cycle target tool |
| `T` | Cycle tier filter |
| `D` | Cycle provider filter |
| `E` | Toggle configured-only mode |
| `F` | Favorite / unfavorite model |
| `Y` | Toggle favorites mode (`Normal filter/sort` default → `Pinned + always visible`) |
| `X` | Clear active custom text filter |
| `G` | Cycle global theme (`Auto → Dark → Light`) |
| `Ctrl+P` | Open command palette (search + run actions) |
| `R/S/C/M/O/L/A/H/V/B/U` | Sort columns |
| `Shift+U` | Update to latest version (when update available) |
| `P` | Settings (API keys, providers, updates, theme) |
| `Q` | Smart Recommend overlay |
| `N` | Changelog |
| `W` | Cycle ping cadence |
| `I` | Feedback / bug report |
| `K` | Help overlay |
| `Ctrl+C` | Exit |
### Mouse
| Action | Result |
|--------|--------|
| **Click column header** | Sort by that column |
| **Click Tier header** | Cycle tier filter |
| **Click CLI Tools header** | Cycle tool mode |
| **Click model row** | Move cursor to model |
| **Double-click model row** | Select and launch model |
| **Right-click model row** | Toggle favorite |
| **Scroll wheel** | Navigate table / overlays / palette |
| **Click footer hotkey** | Trigger that action |
| **Click update banner** | Install latest version and relaunch |
| **Click command palette item** | Select item (double-click to confirm) |
| **Click recommend option** | Select option (double-click to confirm) |
| **Click outside modal** | Close command palette |
→ **[Stability score & column reference](./docs/stability.md)**
---
## ✨ Features
- **Parallel pings** – all 174 models tested simultaneously via native `fetch`
- **Adaptive monitoring** – 2s burst for 60s → 10s normal → 30s idle
- **Stability score** – composite 0–100 (p95 latency, jitter, spike rate, uptime)
- **Smart ranking** – top 3 highlighted 🔥🔥🔥
- **Favorites** – star models with `F` (persisted across sessions); switch display mode with `Y` (pinned + always visible vs. normal rows)
- **Configured-only default** – only shows providers you have keys for
- **Keyless latency** – models ping even without an API key (shown as 🔒 NO KEY)
- **Smart Recommend** – questionnaire picks the best model for your task type
- **Command Palette** – `Ctrl+P` opens a searchable action launcher for filters, sorting, overlays, and quick toggles
- **Install Endpoints** – push a full provider catalog into any tool's config (from Settings `P` or the command palette)
- **Missing tool bootstrap** – detects absent CLIs, offers one-click install, then continues the selected launch automatically
- **Tool compatibility matrix** – incompatible rows highlighted in dark red when a tool mode is active
- **OpenCode Zen models** – 8 free models exclusive to OpenCode CLI/Desktop, powered by the Zen AI gateway
- **Width guardrail** – shows a warning instead of a broken table in narrow terminals
- **Readable everywhere** – semantic theme palette keeps table rows, overlays, badges, and help screens legible in dark and light terminals
- **Global theme switch** – `G` cycles `auto` → `dark` → `light` live without restarting
- **Auto-retry** – timed-out models keep getting retried
- **Aggressive update nudging** – fluorescent green banner when an update is available (impossible to miss), a Shift+U hotkey, a command palette entry, and a background re-check every 5 min; mid-session updates refresh the banner live without restarting
- **Last release timestamp** – light pink footer shows `Last release: Mar 27, 2026, 09:42 PM` from npm so users know how fresh the data is
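The parallel-ping idea above can be sketched with native `fetch` plus `Promise.allSettled`, which keeps one slow or failing endpoint from blocking the rest. This is a minimal illustration, not the tool's actual code; the HEAD-request probe and the default timeout are assumptions:

```javascript
// Sketch: time one HTTP round trip per endpoint, all in parallel.
// A failed or timed-out ping yields ms: null instead of rejecting the batch.
async function pingAll(urls, timeoutMs = 5000) {
  const results = await Promise.allSettled(
    urls.map(async (url) => {
      const start = Date.now();
      // HEAD keeps the probe cheap; AbortSignal.timeout enforces the deadline.
      await fetch(url, { method: "HEAD", signal: AbortSignal.timeout(timeoutMs) });
      return { url, ms: Date.now() - start };
    })
  );
  return results.map((r, i) =>
    r.status === "fulfilled" ? r.value : { url: urls[i], ms: null }
  );
}
```

Because `Promise.allSettled` never short-circuits, the batch completes in roughly the time of the slowest (or timed-out) endpoint rather than the sum of all of them.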
---
## Contributing
We welcome contributions: issues, PRs, new provider integrations.
**Q:** How accurate are the latency numbers?
**A:** Real round-trip times measured by your machine. Results depend on your network and provider load at that moment.
**Q:** Can I add a new provider?
**A:** Yes, see [`sources.js`](./sources.js) for the model catalog format.
→ **[Development guide](./docs/development.md)** · **[Config reference](./docs/config.md)** · **[Tool integrations](./docs/integrations.md)**
---
## Support
[GitHub Issues](https://github.com/vava-nessa/free-coding-models/issues) · [Discord](https://discord.gg/ZTNFHvvCkU)
---
## License
MIT © [vava](https://github.com/vava-nessa)
---
Contributors

vava-nessa · erwinh22 · whit3rabbit · skylaweber · PhucTruong-ctrl
Anonymous usage data is collected to improve the tool. No personal information, ever.