https://github.com/1luvc0d3/metabase-mcp
MCP server connecting Claude to Metabase for natural language data analysis, dashboard management, and SQL queries
- Host: GitHub
- URL: https://github.com/1luvc0d3/metabase-mcp
- Owner: 1luvc0d3
- License: MIT
- Created: 2026-04-11T22:36:09.000Z (15 days ago)
- Default Branch: main
- Last Pushed: 2026-04-19T05:14:52.000Z (8 days ago)
- Last Synced: 2026-04-19T06:27:29.752Z (8 days ago)
- Topics: anthropic, claude, data-analysis, mcp, metabase, model-context-protocol, natural-language, sql
- Language: TypeScript
- Homepage: https://www.npmjs.com/package/@ai-1luvc0d3/metabase-mcp
- Size: 348 KB
- Stars: 2
- Watchers: 0
- Forks: 0
- Open Issues: 6
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
# metabase-mcp
The **write-enabled, AI-augmented** [MCP](https://modelcontextprotocol.io/) server for [Metabase](https://www.metabase.com/) — create dashboards, ask questions in plain English, and get automated insights through Claude, on any Metabase version.
## Why This One?
Metabase shipped an [official MCP server in v0.60](https://www.metabase.com/docs/latest/ai/mcp) focused on read and search. This server complements it with **write operations, AI-generated insights, production security controls, and support for Metabase versions older than v0.60**.
| Capability | @ai-1luvc0d3/metabase-mcp | Metabase Official (v0.60+) | Other community servers |
|---|:--:|:--:|:--:|
| Read dashboards / cards / databases | ✅ | ✅ | ✅ |
| Write ops (create/update/delete cards, dashboards, collections) | ✅ | ❌ | partial |
| Batch execution (parallel multi-op in one call) | ✅ | ❌ | ❌ |
| Workflow pipelines (chained steps with output references) | ✅ | ❌ | ❌ |
| Natural language → SQL (+ explain / optimize / validate) | ✅ | partial | ❌ |
| Automated insights & trend analysis | ✅ | ❌ | ❌ |
| SQL injection guardrails | ✅ | n/a | ❌ |
| Tiered rate limiting (read / write / LLM) | ✅ | n/a | ❌ |
| Audit logging with risk levels | ✅ | n/a | ❌ |
| Token-optimized compact responses (default) | ✅ | ❌ | partial |
| Server modes (read / write / full) | ✅ | ❌ | ❌ |
| Works on Metabase < v0.60 (no upgrade required) | ✅ | ❌ | varies |
| OAuth per-user permission scoping | ❌ (API key) | ✅ | varies |
**Use this if:** you want Claude to *create* content in Metabase, you want AI-generated insights on query results, or you're on a Metabase version older than v0.60.
**Use Metabase's official MCP if:** you're on v0.60+, only need read/search, and want per-user permission scoping via OAuth.
## Features
- **30 tools** across read, batch, workflow, write, NLQ, and insight categories
- **Batch execution** -- run up to 20 read operations in parallel in a single call
- **Workflow pipelines** -- chain tools sequentially with `$stepName.path` output references between steps
- **Compact responses by default** -- all tools return compact JSON (~50% token reduction); opt into pretty-printing with `format: "default"`
- **Natural language to SQL** -- ask questions, get SQL + results (powered by Claude)
- **SQL guardrails** -- injection detection, DDL/DML blocking, dangerous pattern enforcement
- **Tiered rate limiting** -- configurable per-minute limits for read, write, and LLM operations
- **Audit logging** -- every operation logged with risk assessment
- **Three server modes** -- `read` (safe default), `write`, or `full` (with AI insights)
- **Schema caching** -- fast NLQ context for large databases
## Quick Start
### One-click install (recommended)
1. Download the latest `metabase-mcp-*.mcpb` from [GitHub Releases](https://github.com/1luvc0d3/metabase-mcp/releases/latest)
2. Double-click to install in Claude Desktop
3. Enter your Metabase URL and API key when prompted — stored securely in the OS keychain
### Using npx
```bash
npx @ai-1luvc0d3/metabase-mcp
```
### Manual install
```bash
npm install -g @ai-1luvc0d3/metabase-mcp
metabase-mcp
```
### From source
```bash
git clone https://github.com/1luvc0d3/metabase-mcp.git
cd metabase-mcp
npm install
npm run build
npm start
```
## Configuration
Set environment variables or create a `.env` file (see `.env.example`):
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `METABASE_URL` | Yes | - | Your Metabase instance URL |
| `METABASE_API_KEY` | Yes | - | Metabase API key |
| `MCP_MODE` | No | `read` | Server mode: `read`, `write`, or `full` |
| `ANTHROPIC_API_KEY` | No | - | Enables NLQ and insight tools |
| `METABASE_TIMEOUT` | No | `30000` | Request timeout (ms) |
| `METABASE_MAX_ROWS` | No | `10000` | Max rows returned per query |
| `LOG_LEVEL` | No | `info` | Logging: `debug`, `info`, `warn`, `error` |
| `RATE_LIMIT_REQUESTS_PER_MINUTE` | No | `60` | Per-minute request rate limit |
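For local development, these can live in a `.env` file. A minimal sketch with placeholder values:

```bash
METABASE_URL=https://your-metabase.example.com
METABASE_API_KEY=mb_your_api_key_here
MCP_MODE=read
LOG_LEVEL=info
```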
### Generate a Metabase API Key
1. Go to your Metabase instance
2. Navigate to **Admin** > **Settings** > **API Keys**
3. Click **Create API Key**
4. Copy the key and set it as `METABASE_API_KEY`
## Claude Desktop Integration
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
"mcpServers": {
"metabase": {
"command": "npx",
"args": ["@ai-1luvc0d3/metabase-mcp"],
"env": {
"METABASE_URL": "https://your-metabase.example.com",
"METABASE_API_KEY": "mb_your_api_key_here",
"MCP_MODE": "read"
}
}
}
}
```
## Server Modes
| Mode | Tools | Description |
|------|-------|-------------|
| `read` | 12 + NLQ | Read-only access, batch execution, and workflow pipelines |
| `write` | 22 + NLQ | Adds create/update/delete for cards, dashboards, collections |
| `full` | 30 | All tools including automated insights and trend analysis |
### Available Tools
**Read (always available)**
`list_dashboards`, `get_dashboard`, `list_cards`, `get_card`, `execute_card`, `list_databases`, `get_database_schema`, `execute_query`, `search_content`, `get_collections`
**Batch & Workflow (always available)**
`batch_execute`, `run_workflow`
**Write (write/full modes)**
`create_card`, `update_card`, `delete_card`, `create_dashboard`, `update_dashboard`, `delete_dashboard`, `add_card_to_dashboard`, `remove_card_from_dashboard`, `create_collection`, `move_to_collection`
**NLQ (requires ANTHROPIC_API_KEY)**
`nlq_to_sql`, `explain_sql`, `optimize_sql`, `validate_sql`
**Insights (full mode + ANTHROPIC_API_KEY)**
`ask_data`, `generate_insights`, `compare_metrics`, `trend_analysis`
## Examples
### 1. Exploring your data
> **You**: What dashboards do we have related to customer retention?
Claude uses `search_content` to find retention-related dashboards, then `get_dashboard` to summarize the key metrics. You see a ranked list with the most relevant results.
> **You**: Run the "Monthly Active Users" card for the last 90 days
Claude calls `list_cards` to locate the card, then `execute_card` with the appropriate time filter. Results come back as a table you can ask follow-up questions about ("what was the biggest dip and when?").
### 2. Ad-hoc SQL with safety rails
> **You**: Show me the top 10 products by revenue last quarter from the sales database
Claude calls `list_databases` to find the sales database, `get_database_schema` to inspect the relevant tables, then generates and runs a `SELECT` query via `execute_query`. The query is validated against the SQL guardrails (no `DROP`/`DELETE`/`UNION`, single statement only) before execution. An audit log entry is written with the query and row count.
> **You**: DROP TABLE users
The request is blocked. Claude surfaces: *"Blocked SQL pattern detected: DROP — this operation is not allowed."* The block is logged as a high-risk audit event.
### 3. Natural language to SQL (requires ANTHROPIC_API_KEY)
> **You**: Which support agents closed the most tickets this week, and how does that compare to last week?
Claude uses `nlq_to_sql` with the database schema as context to generate a comparative SQL query. You can ask it to `explain_sql` in plain English before running, or `optimize_sql` to suggest performance improvements — all before hitting your database.
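A call to `nlq_to_sql` might look like the following; the exact argument names (`database_id`, `question`) are assumptions for illustration, not the tool's documented schema:

```json
{
  "tool": "nlq_to_sql",
  "args": {
    "database_id": 2,
    "question": "Which support agents closed the most tickets this week vs last week?"
  }
}
```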
### 4. Saving a reusable query as a card (write mode)
> **You**: Save the MAU trend query we just ran as a card called "MAU — Last 90 Days" in the Growth collection
Claude calls `get_collections` to find "Growth", then `create_card` with your validated SQL. The card now lives in your Metabase library and can be re-executed by name in future conversations via `execute_card` — no LLM tokens spent on re-generating the query.
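A `create_card` call might be shaped like this; the argument names (`collection_id`, `sql`) are assumptions for illustration:

```json
{
  "tool": "create_card",
  "args": {
    "name": "MAU — Last 90 Days",
    "collection_id": 5,
    "sql": "SELECT ..."
  }
}
```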
### 5. Batch execution — parallel data gathering
> **You**: Get me the details for dashboards 1, 3, and 7, plus the schema for the sales database
Claude uses `batch_execute` to run all four operations in parallel in a single call:
```json
{
"operations": [
{ "tool": "get_dashboard", "args": { "dashboard_id": 1 } },
{ "tool": "get_dashboard", "args": { "dashboard_id": 3 } },
{ "tool": "get_dashboard", "args": { "dashboard_id": 7 } },
{ "tool": "get_database_schema", "args": { "database_id": 2 } }
]
}
```
One tool call instead of four. Results come back with per-operation success/failure, so partial failures don't block the rest.
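On the client side, a consumer of these per-operation results might split successes from failures like this. The response shape (`ok`, `data`, `error`) is an assumption for illustration, not the server's documented schema:

```typescript
// Sketch: separate successful batch operations from failed ones so that
// partial failures can be reported without discarding the good results.
type OpResult = { tool: string; ok: boolean; data?: unknown; error?: string };

function splitResults(results: OpResult[]): { succeeded: OpResult[]; failed: OpResult[] } {
  return {
    succeeded: results.filter((r) => r.ok),
    failed: results.filter((r) => !r.ok),
  };
}

const { succeeded, failed } = splitResults([
  { tool: "get_dashboard", ok: true, data: { id: 1 } },
  { tool: "get_dashboard", ok: false, error: "dashboard 3 not found" },
]);
console.log(succeeded.length, failed.length); // 1 1
```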
### 6. Workflow pipelines — chained multi-step operations
> **You**: Find dashboards about revenue, get the first one's cards, and run the top card
Claude uses `run_workflow` to chain the steps with output references:
```json
{
"steps": [
{ "name": "find", "tool": "search_content", "args": { "query": "revenue", "type": "dashboard" } },
{ "name": "dash", "tool": "get_dashboard", "args": { "dashboard_id": "$find.results[0].id" } },
{ "name": "data", "tool": "execute_card", "args": { "card_id": "$dash.dashcards[0].card_id" } }
]
}
```
Each step can reference results from previous steps using `$stepName.path[index].field` syntax. One round trip instead of three back-and-forth exchanges.
### 7. Automated insights on query results (full mode)
> **You**: Run last quarter's revenue query and tell me what's interesting
Claude uses `execute_query` to run the query, then `generate_insights` which asks the Claude API to identify trends, outliers, and recommendations. You get a structured summary: headline number, 3-5 bullet points, and suggested follow-up questions.
> **Note on data privacy**: `generate_insights`, `ask_data`, `compare_metrics`, and `trend_analysis` send query result rows to the Anthropic API for analysis. See [Data Privacy Note](#data-privacy-note) for details.
## Security
This server is designed for production use with multiple layers of protection:
- **SQL Guardrails**: Only `SELECT` and `WITH` queries are allowed by default. DDL/DML statements (`DROP`, `DELETE`, `INSERT`, etc.) are blocked. Injection patterns (UNION, comments, multi-statement, file ops, time-based attacks) are detected and rejected.
- **Tiered Rate Limiting**: Separate limits for read (120/min), write (30/min), and LLM (20/min) operations.
- **Audit Logging**: Every operation is logged with risk assessment (low/medium/high). Sensitive fields are automatically redacted. Log files are created with secure permissions (owner-only read/write).
- **Secret Isolation**: API keys are never exposed to tool handlers. Error responses from Metabase are sanitized to prevent credential leakage.
- **Redirect Protection**: API key headers are never forwarded on HTTP redirects.
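As a rough illustration of the allowlist approach, a minimal guard might look like this. It is a simplification: the real guardrails detect many more patterns than the three checks shown here.

```typescript
// Minimal allowlist-style SQL guard: SELECT/WITH prefix only,
// no SQL comments, single statement.
function isQueryAllowed(sql: string): boolean {
  const trimmed = sql.trim();
  if (!/^(select|with)\b/i.test(trimmed)) return false;          // allowlist prefix
  if (/--|\/\*/.test(trimmed)) return false;                     // SQL comments
  if (trimmed.replace(/;\s*$/, "").includes(";")) return false;  // multi-statement
  return true;
}

console.log(isQueryAllowed("SELECT * FROM orders"));        // true
console.log(isQueryAllowed("DROP TABLE users"));            // false
console.log(isQueryAllowed("SELECT 1; DELETE FROM users")); // false
```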
### Data Privacy Note
When using NLQ or insight tools (`ask_data`, `generate_insights`, etc.), **query result data is sent to the Anthropic API** for analysis. If your queries return sensitive data (PII, financial records, etc.), that data will be processed by Claude. Consider this when enabling NLQ features on databases containing sensitive information.
## Privacy Policy
**What this extension collects:**
- Your Metabase API key and URL (stored locally in the OS keychain — never transmitted to us)
- Your Anthropic API key, if provided (stored locally in the OS keychain — never transmitted to us)
- No telemetry, analytics, or usage data is collected by this extension
**What this extension transmits:**
- All Metabase API calls (queries, dashboards, cards) go directly from your machine to your own Metabase instance
- NLQ/insight tool usage sends your natural-language question, database schema context, and query result samples to the Anthropic API for processing (governed by [Anthropic's privacy policy](https://www.anthropic.com/legal/privacy))
- If you don't provide an Anthropic API key, no data is sent to Anthropic — NLQ and insight tools are simply disabled
**Data retention:**
- This extension does not retain any data. Audit logs (if enabled via `AUDIT_LOG_FILE`) are written to your local filesystem only, with owner-only permissions (0600)
**Third-party privacy policies:**
- [Metabase Privacy Policy](https://www.metabase.com/privacy)
- [Anthropic Privacy Policy](https://www.anthropic.com/legal/privacy)
**Reporting security issues:** See [SECURITY.md](SECURITY.md) for responsible disclosure.
## Troubleshooting
### "Cannot connect to Metabase" / 401 errors
- Verify `METABASE_URL` is correct and reachable (test: `curl $METABASE_URL/api/health`)
- Verify `METABASE_API_KEY` is valid (regenerate in Metabase Admin > Settings > API Keys if needed)
- The API key must have permissions for the databases you want to query
### "Blocked SQL pattern detected" errors
- Only `SELECT` and `WITH` queries are allowed by default
- Even inside a `SELECT`, patterns like `UNION SELECT`, SQL comments (`--`, `/* */`), `xp_cmdshell`, `INTO OUTFILE`, etc. are blocked
- Raw DML (`INSERT`, `UPDATE`, `DELETE`) via `execute_query` will not pass the guardrails in any mode; this is by design. Use the dedicated write tools (e.g. `create_card`, `update_dashboard`) in `write` or `full` mode instead
### "Rate limit exceeded" errors
- Default limits: 120 reads/min, 30 writes/min, 20 LLM calls/min
- Adjust with `RATE_LIMIT_REQUESTS_PER_MINUTE` env var
- Wait for the retry-after period shown in the error
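The per-minute limiting behaves roughly like a sliding window. A simplified sketch of the idea (not the server's actual implementation):

```typescript
// Sliding-window rate limiter: keep timestamps from the last window,
// reject the call once the window already holds `limit` entries.
class RateLimiter {
  private timestamps: number[] = [];
  constructor(private limit: number, private windowMs = 60_000) {}

  allow(now: number = Date.now()): boolean {
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}

const writes = new RateLimiter(2);
console.log(writes.allow(0), writes.allow(1), writes.allow(2)); // true true false
```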
### NLQ tools unavailable
- Requires `ANTHROPIC_API_KEY` — verify it's set
- Check that it starts with `sk-` and has remaining credits
- Insight tools additionally require `MCP_MODE=full`
### Claude Desktop: extension installed but tools not appearing
- Fully quit and restart Claude Desktop
- Check logs: `~/Library/Logs/Claude/mcp*.log` on macOS
- Verify `node --version` is >= 18
## Feedback Wanted
This project is young and your input shapes where it goes next — especially now that Metabase has shipped its own official MCP. A minute of your time helps a lot:
- **Is this useful for your workflow?** Start a [GitHub Discussion](https://github.com/1luvc0d3/metabase-mcp/discussions) or [star the repo](https://github.com/1luvc0d3/metabase-mcp) — tells me where to invest.
- **Which tools do you actually use?** Let me know in [Discussions](https://github.com/1luvc0d3/metabase-mcp/discussions) — helps prioritize what stays, what grows.
- **Hit a bug?** [File an issue](https://github.com/1luvc0d3/metabase-mcp/issues/new) with your Metabase version, `MCP_MODE`, and reproduction steps.
- **Missing a feature?** [Request it](https://github.com/1luvc0d3/metabase-mcp/issues/new) — especially something the [official Metabase MCP](https://www.metabase.com/docs/latest/ai/mcp) doesn't cover.
- **Running in production?** I'd genuinely love to hear about it — open a Discussion or drop a note on the repo.
## Support
- **Bug reports / feature requests:** [GitHub Issues](https://github.com/1luvc0d3/metabase-mcp/issues)
- **Questions / general feedback:** [GitHub Discussions](https://github.com/1luvc0d3/metabase-mcp/discussions)
- **Security vulnerabilities:** [Private disclosure](https://github.com/1luvc0d3/metabase-mcp/security/advisories/new) — see [SECURITY.md](SECURITY.md)
- **Response time:** typically within 5 business days
## Development
```bash
npm install # Install dependencies
npm run build # Compile TypeScript
npm run dev # Watch mode
npm test # Run all tests
npm run type-check # Type checking
npm run lint # Linting
```
See [CONTRIBUTING.md](CONTRIBUTING.md) for more details.
## License
[MIT](LICENSE)