
# Vertex MCP Chatbot

[![Python](https://img.shields.io/badge/python-3.10%20%7C%203.11%20%7C%203.12-blue)](https://www.python.org)
[![MCP](https://img.shields.io/badge/MCP-blue.svg)](https://modelcontextprotocol.io)
[![Python SDK](https://img.shields.io/badge/Python%20SDK-green.svg)](https://github.com/modelcontextprotocol/python-sdk)
[![Specification](https://img.shields.io/badge/specification-gray.svg)](https://spec.modelcontextprotocol.io/specification/)
[![Documentation](https://img.shields.io/badge/documentation-purple.svg)](https://modelcontextprotocol.io)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

An interactive command-line chatbot for Google Cloud Vertex AI, supporting both **Anthropic's Claude** and **Google Gemini** models; choose your preferred provider with a single flag. With Claude, the chatbot offers full MCP (Model Context Protocol) integration, autonomously discovering and executing tools during conversations. It supports Vertex AI or direct Anthropic API access, with multi-turn conversations and comprehensive error handling.

## Features

- 🤖 **Interactive Chat Interface**: Clean, intuitive terminal UI with rich formatting
- 🔀 **Multi-Provider Support**: Choose between Claude (default) or Gemini with `--provider` flag
- 🧠 **Claude via Vertex AI**: Anthropic's Claude Sonnet 4.5 through Google Cloud Vertex AI
- 🌟 **Gemini Integration**: Google's Gemini 2.5 Flash available as alternative provider
- 🔧 **Automatic Tool Calling** (Claude): Autonomous MCP tool discovery and execution during conversations
- 📝 **Markdown Support**: Responses are rendered with proper markdown formatting
- 💾 **Persistent History**: Conversation history saved between sessions on disk
- 🎨 **Rich Terminal UI**: Colorful, well-formatted output using Rich library
- 🔧 **Flexible Configuration**: Use Vertex AI (with GCP credentials) or direct Anthropic API
- 🔌 **Full MCP Integration**: Load tools from `mcp_config.json` with automatic tool discovery and execution

### MCP Features

The chatbot includes comprehensive MCP (Model Context Protocol) support:

- ✅ **Tool Execution**: Claude automatically uses MCP tools during conversations
- ✅ **Resource Access**: Read files, APIs, and other resources via URIs
- ✅ **Prompt Templates**: Use pre-defined templates for common tasks
- ✅ **Multiple Transports**: stdio, HTTP, and SSE protocols supported
- ✅ **Multi-Server Support**: Connect to multiple MCP servers simultaneously
- ✅ **Authentication**: OAuth 2.0, Basic Auth, and custom headers
- ✅ **Reliability**: Automatic retry with exponential backoff
- ✅ **Priority System**: Smart conflict resolution when servers offer similar tools

## Quick Start

```bash
# 1. Clone and setup
git clone https://github.com/intertwine/vertex-mcp-chatbot.git
cd vertex-mcp-chatbot
make setup # Install dependencies and create .env

# 2. Configure (edit .env file with your GCP project settings)
nano .env # or use your preferred editor

# 3. Authenticate and run
make auth # Authenticate with Google Cloud
make run # Start the chatbot with quiet MCP logging!
```

For all available commands, run:

```bash
make help
```

## Prerequisites

- Python 3.10 or higher
- [uv](https://docs.astral.sh/uv/) package manager installed
- Access to Google Cloud Vertex AI with Anthropic Claude enabled, in a region that exposes Claude models through Vertex AI
- Google Cloud CLI (`gcloud`) installed and authenticated

## Installation

1. Clone this repository:

```bash
git clone https://github.com/intertwine/vertex-mcp-chatbot.git
cd vertex-mcp-chatbot
```

1. Run the quick setup (installs dependencies and creates .env file):

```bash
make setup
```

> This runs `uv sync` and copies `.env.example` to `.env`

1. Edit `.env` to override default project settings:

```bash
GOOGLE_CLOUD_PROJECT='your-gcp-project-id'
GOOGLE_CLOUD_LOCATION='us-east1'
```

1. Authenticate with Google Cloud:

```bash
make auth
```

> This runs `gcloud auth application-default login`

**Alternative manual setup:**

```bash
# Install dependencies manually
uv sync

# Copy environment file
cp .env.example .env

# Authenticate manually
gcloud auth application-default login
```

## Usage

### Basic Usage

Start the chatbot with the default provider (Claude):

```bash
make run
```

### Choosing Your AI Provider

**Use Claude (default)**:

```bash
make run # Uses Claude Sonnet 4.5 (quiet MCP logging)
make run-claude # Same as above (quiet MCP logging)
make run-opus # Uses Claude Opus 4.1 (quiet MCP logging)
make run-haiku # Uses Claude Haiku 4.5 (quiet MCP logging)
```

**Use Gemini**:

```bash
make run-gemini # Uses Gemini 2.5 Flash (quiet MCP logging)
make run-gemini-pro # Uses Gemini 2.5 Pro (quiet MCP logging)
```

**With verbose logging**:

```bash
make run-verbose # Claude with INFO-level MCP logging
make run-debug # Claude with DEBUG-level MCP logging
```

**Alternative (direct uv commands)**:

```bash
uv run main.py # Claude Sonnet 4.5 (default)
uv run main.py --model claude-opus-4-1-20250805 # Claude Opus
uv run main.py --model claude-haiku-4-5 # Claude Haiku 4.5
uv run main.py --provider gemini # Gemini 2.5 Flash
uv run main.py --provider gemini --model gemini-2.5-pro # Gemini 2.5 Pro

# Logging control options
uv run main.py --quiet-mcp # Suppress MCP server info logging
uv run main.py --log-level DEBUG # Show detailed MCP debug information
uv run main.py --log-level ERROR # Only show errors from MCP operations
```

**Key Differences**:

- **Claude**: Full MCP tool-calling support with autonomous tool discovery and execution
- **Gemini**: Fast, efficient responses; MCP servers can be configured but tool calling is manual

Both providers offer the same intuitive terminal interface, markdown rendering, and persistent conversation history.

### Configuring Claude via Vertex AI

The CLI automatically attempts to run Claude through Google Cloud Vertex AI using Application Default Credentials. To customise the behaviour, set any of the following environment variables before launching the REPL (they can be stored in `.env`):

- `CLAUDE_VERTEX_ENABLED` – set to `false` to fall back to the public Anthropic API (requires `ANTHROPIC_API_KEY`)
- `CLAUDE_VERTEX_PROJECT` – override the GCP project used for billing (`GOOGLE_CLOUD_PROJECT` is used otherwise)
- `CLAUDE_VERTEX_LOCATION` – override the Vertex region (defaults to `GOOGLE_CLOUD_LOCATION` or `us-east1`)
- `CLAUDE_VERTEX_BASE_URL` – fully override the Vertex endpoint if you need to point at a proxy
- `CLAUDE_MODEL` – override the default Claude model (default: `claude-sonnet-4-5-20250929`)
- `CLAUDE_API_VERSION` – override the Anthropic API version header (default: `2023-06-01`)
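
For example, a `.env` that disables Vertex routing and uses the public Anthropic API instead might look like this (values are illustrative placeholders):

```bash
# .env (illustrative values)
CLAUDE_VERTEX_ENABLED=false
ANTHROPIC_API_KEY=your-anthropic-api-key
CLAUDE_MODEL=claude-sonnet-4-5-20250929
```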

See [docs/claude-agent.md](docs/claude-agent.md) for an end-to-end walkthrough that covers authentication, MCP configuration, and troubleshooting tips when connecting Claude through Vertex AI, plus guidance on when to prefer the legacy Gemini provider.

### MCP Configuration

To use MCP features, create an `mcp_config.json` file in the project root. See the [MCP User Guide](docs/mcp-guide.md) for detailed configuration instructions and examples.

#### Example: Autonomous Tool Usage

When you connect to an MCP server, Claude automatically sees available tools and can use them during conversations:

```text
You> /mcp connect filesystem
✅ Connected to MCP server: filesystem

You> Please list the files in the current directory

Claude: The current directory contains the following files:
1. CODE_OF_CONDUCT.md (1.3 KB)
2. pyproject.toml (1.9 KB) - Python project configuration
3. README.md (24.6 KB) - Project documentation
4. main.py (1.7 KB) - Main Python script
...
```

Claude automatically:

- Discovered the `list_files` tool from the connected MCP server
- Decided to call it with appropriate parameters
- Received and processed the results
- Formatted them into a helpful response

No explicit tool invocation needed - Claude autonomously chooses when and how to use tools!
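
Conceptually, each turn runs the standard agentic loop: call the model, execute any requested tool, feed the result back, and repeat until the model answers in plain text. A toy sketch of that control flow (not the Claude Agent SDK's actual API; all names here are illustrative):

```python
def run_turn(send_to_model, execute_tool, user_message):
    """Keep looping while the model requests tool calls (conceptual sketch)."""
    reply = send_to_model({"role": "user", "content": user_message})
    while reply.get("tool_call"):               # model wants a tool run
        name, args = reply["tool_call"]
        result = execute_tool(name, args)       # execute via the MCP server
        reply = send_to_model({"role": "tool", "content": result})
    return reply["text"]

# Toy stand-ins to show the flow: first call requests a tool, second answers
def toy_model(msg, _state={"step": 0}):
    _state["step"] += 1
    if _state["step"] == 1:
        return {"tool_call": ("list_files", {"path": "."})}
    return {"text": f"The directory contains: {msg['content']}"}

print(run_turn(toy_model, lambda name, args: "README.md, main.py", "List files"))
# The directory contains: README.md, main.py
```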

### Controlling MCP Logging

**Note**: The default `make run` targets now use `--quiet-mcp` for cleaner output. Use `make run-verbose` or `make run-debug` if you need more detailed logging.

The chatbot provides options to control the verbosity of MCP server logging:

**Suppress MCP logging (quiet mode):**

```bash
uv run main.py --quiet-mcp
```

This suppresses all informational logging from MCP operations, showing only errors.

**Adjust logging level:**

```bash
uv run main.py --log-level DEBUG # Show detailed debug information
uv run main.py --log-level INFO # Show informational messages
uv run main.py --log-level WARNING # Default - show warnings and above
uv run main.py --log-level ERROR # Show only errors
uv run main.py --log-level CRITICAL # Show only critical errors
```

These options are useful when:

- You want a cleaner output during tool execution (`--quiet-mcp`)
- You're debugging MCP server connections (`--log-level DEBUG`)
- You only care about errors (`--log-level ERROR`)
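
These flags presumably map onto Python's standard `logging` levels; a minimal sketch of the wiring (function name, logger name, and behaviour are assumptions, not the chatbot's actual code):

```python
import logging

def configure_mcp_logging(level_name="WARNING", quiet_mcp=False):
    """Set the MCP logger level; --quiet-mcp forces errors-only (sketch)."""
    level = logging.ERROR if quiet_mcp else getattr(logging, level_name.upper())
    logging.getLogger("mcp").setLevel(level)
    return logging.getLevelName(level)

print(configure_mcp_logging("DEBUG"))         # DEBUG
print(configure_mcp_logging(quiet_mcp=True))  # ERROR
```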

### Scrollable Content

When responses or content are too long for your terminal, the chatbot automatically switches to a scrollable view:

**Navigation Controls:**

- **↑/↓** or **j/k** - Scroll up/down line by line
- **Home/g** - Jump to the top of the content
- **End/G** - Jump to the bottom of the content
- **q/Esc** - Exit scrollable view and return to chat

**Features:**

- Automatically detects when content exceeds terminal height
- Works for:
  - LLM responses
  - `/history` command
  - `/mcp tools` listings
  - `/mcp resources` listings
  - `/mcp prompts` listings
- Preserves all markdown formatting and styling
- Short content displays normally (no change in experience)

### Available Commands

While chatting, you can use these commands:

- `/help` - Show available commands and tips
- `/clear` - Clear the chat history and start fresh (resets the Claude session)
- `/history` - Display the full conversation history with markdown rendering
- `/system <instruction>` - Update the system instruction and restart the Claude agent
- `/quit` - Exit the chatbot

**MCP Commands** (when MCP is available):

- `/mcp connect <server-name>` - Connect to an MCP server from your config
- `/mcp list` - Show configured servers and their connection status
- `/mcp disconnect <server-name>` - Disconnect from an MCP server
- `/mcp tools` - Show available tools from all connected servers
- `/mcp resources` - Show available resources from all connected servers
- `/mcp prompts` - List available prompt templates from all connected servers
- `/mcp prompt <name>` - Use a specific prompt template

### MCP Tool Integration

When MCP servers are connected, their tools become automatically available during conversations. Claude will intelligently use these tools when appropriate to help answer your questions or perform tasks. You don't need to use special syntax - just chat naturally and Claude will:

- Recognize when a tool would be helpful
- Execute the appropriate tool with the right parameters
- Include the tool results in its response

For example, if you have a weather MCP server connected and ask "What's the weather like?", Claude will automatically use the weather tool to get current conditions.

### MCP Resource Integration

MCP resources are automatically read when you reference them by URI in your messages. This allows you to seamlessly include external data in your conversations with Claude:

- **Automatic Detection**: When you include a resource URI in your message, it's automatically detected
- **Transparent Reading**: The resource content is fetched and included in the context sent to Claude
- **Multiple Resources**: You can reference multiple resources in a single message
- **Standard URI Format**: Use standard URIs like `file:///path/to/data.json` or `http://example.com/api/data`

**Example:**

```text
You> Can you analyze the data in file:///home/user/sales_report.csv?

[The chatbot automatically reads the CSV file and includes its content in the prompt to Claude,
who can then analyze and discuss the data]
```
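
The automatic detection step might be as simple as a regex scan for the schemes mentioned above (a sketch; the chatbot's actual matcher may differ):

```python
import re

# Match file:// and http(s):// URIs embedded in a chat message (illustrative)
URI_PATTERN = re.compile(r"(?:file|https?)://\S+")

def find_resource_uris(message):
    # Strip trailing sentence punctuation that isn't part of the URI
    return [u.rstrip("?.,!") for u in URI_PATTERN.findall(message)]

print(find_resource_uris("Can you analyze the data in file:///home/user/sales_report.csv?"))
# ['file:///home/user/sales_report.csv']
```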

### MCP Prompt Templates

MCP servers can provide prompt templates that help structure interactions for specific tasks. These templates make it easy to perform complex operations with consistent formatting:

- **List Templates**: Use `/mcp prompts` to see all available templates
- **Use a Template**: Use `/mcp prompt <name>` to apply a template
- **Interactive Arguments**: The chatbot will prompt you for any required template arguments
- **Seamless Processing**: Filled templates are sent directly to Claude for processing

**Example:**

```text
You> /mcp prompts
Available prompts from code-analyzer:
- analyze_function: Analyze a function for complexity and suggest improvements
- review_pr: Review pull request changes and provide feedback
- explain_code: Explain how a piece of code works in simple terms

You> /mcp prompt analyze_function
Enter value for 'function_name': calculateTotalPrice
Enter value for 'context': This function processes shopping cart items

[The template is filled with your values and sent to Claude, who provides
a detailed analysis of the function based on the structured prompt]
```

### Example Session

```text
🚀 Starting Claude Agent REPL...
✅ Ready!

You> What is machine learning?

╭─ Claude ─────────────────────────────────────────────╮
│                                                      │
│ Machine learning (ML) is a branch of artificial      │
│ intelligence focused on building systems that can    │
│ learn from data and improve their performance over   │
│ time without being explicitly programmed for every   │
│ scenario...                                          │
│                                                      │
╰──────────────────────────────────────────────────────╯

You> /system You are a patient tutor for high-school students.
System prompt updated.

You> Explain overfitting in one paragraph.

╭─ Claude ─────────────────────────────────────────────╮
│                                                      │
│ Overfitting happens when a model memorises the       │
│ training data instead of learning the underlying     │
│ patterns, so it performs well on the data it has     │
│ seen but poorly on new examples. A simple way to     │
│ picture it is a student who only studies past exam   │
│ answers: they may ace the practice questions yet     │
│ struggle when the real test words things slightly    │
│ differently.                                         │
│                                                      │
╰──────────────────────────────────────────────────────╯

You> /quit

👋 Goodbye!
```

## MCP Configuration (docs/mcp-guide.md)

### Basic Configuration

Create an `mcp_config.json` file in the project root:

```json
{
  "servers": [
    {
      "name": "filesystem",
      "transport": "stdio",
      "command": ["python", "examples/mcp-servers/filesystem_server.py"]
    },
    {
      "name": "weather-api",
      "transport": "http",
      "url": "http://localhost:8080/mcp",
      "auth": {
        "type": "basic",
        "username": "user",
        "password": "pass"
      }
    }
  ]
}
```

### Environment Variables

The MCP configuration supports environment variable substitution using `${VAR_NAME}` syntax. Variables are automatically loaded from your `.env` file:

```bash
# .env file
API_KEY=your-secret-key
OAUTH_CLIENT_SECRET=your-oauth-secret
```

```json
{
  "servers": [
    {
      "name": "api-server",
      "transport": "http",
      "url": "https://api.example.com",
      "headers": {
        "Authorization": "Bearer ${API_KEY}"
      }
    }
  ]
}
```

You can also provide default values with `${VAR_NAME:-default}` syntax.
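
A minimal sketch of how `${VAR}` / `${VAR:-default}` substitution could be implemented (illustrative; the project's actual parser may behave differently):

```python
import os
import re

def substitute_env(text, env=None):
    """Replace ${VAR} and ${VAR:-default} placeholders in a config string."""
    env = os.environ if env is None else env
    pattern = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}")

    def repl(match):
        name, default = match.group(1), match.group(2)
        if name in env:
            return env[name]
        if default is not None:
            return default
        return match.group(0)  # leave unresolved placeholders as-is

    return pattern.sub(repl, text)

config = '{"url": "${HOST:-localhost}", "key": "${API_KEY}"}'
print(substitute_env(config, {"API_KEY": "abc123"}))
# {"url": "localhost", "key": "abc123"}
```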

### Advanced Configuration Options

#### OAuth 2.0 Authentication

```json
{
  "name": "github-api",
  "transport": "http",
  "url": "https://api.github.com/mcp",
  "auth": {
    "type": "oauth",
    "authorization_url": "https://github.com/login/oauth/authorize",
    "token_url": "https://github.com/login/oauth/access_token",
    "client_id": "your-client-id",
    "client_secret": "${OAUTH_CLIENT_SECRET}",
    "scope": "repo read:user",
    "redirect_uri": "http://localhost:8080/callback"
  }
}
```

#### Connection Retry Configuration

```json
{
  "name": "flaky-server",
  "transport": "stdio",
  "command": ["node", "server.js"],
  "retry": {
    "max_attempts": 5,
    "initial_delay": 1.0,
    "max_delay": 30.0,
    "exponential_base": 2.0,
    "jitter": true
  }
}
```
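
These parameters describe a standard exponential-backoff schedule. A sketch of how the delays grow under them (illustrative of the semantics, not the chatbot's actual retry code):

```python
import random

def backoff_delays(max_attempts=5, initial_delay=1.0, max_delay=30.0,
                   exponential_base=2.0, jitter=False, seed=None):
    """Delays between attempts: initial_delay * base**n, capped at max_delay."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_attempts - 1):  # no wait after the final attempt
        delay = min(initial_delay * exponential_base ** attempt, max_delay)
        if jitter:
            delay *= rng.uniform(0.5, 1.5)  # randomise to avoid retry stampedes
        delays.append(delay)
    return delays

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0]
```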

#### Server Priority

When multiple servers offer similar tools, use priority to control which server is preferred:

```json
{
  "servers": [
    {
      "name": "primary-calc",
      "transport": "stdio",
      "command": ["python", "calc_server.py"],
      "priority": 1
    },
    {
      "name": "backup-calc",
      "transport": "http",
      "url": "http://backup.example.com/mcp",
      "priority": 2
    }
  ]
}
```
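
Assuming lower numbers win (as the primary/backup example suggests), the conflict resolution can be sketched as picking the matching server with the smallest priority (illustrative, not the project's actual code):

```python
def resolve_tool(tool_name, servers):
    """Choose the serving server for a tool offered by several servers."""
    candidates = [s for s in servers if tool_name in s["tools"]]
    return min(candidates, key=lambda s: s.get("priority", float("inf")))["name"]

servers = [
    {"name": "backup-calc", "priority": 2, "tools": {"add"}},
    {"name": "primary-calc", "priority": 1, "tools": {"add"}},
]
print(resolve_tool("add", servers))  # primary-calc
```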

### Example MCP Servers

The project includes example MCP servers in `examples/mcp-servers/`:

- **filesystem_server.py**: File operations (list, read, write)
- **weather_server.py**: Weather data and forecasts

See [examples/README.md](examples/README.md) for detailed setup instructions.

## Project Structure

```text
vertex-mcp-chatbot/
├── main.py                       # Entry point
├── pyproject.toml                # Python project configuration and dependencies
├── pytest.ini                    # Pytest configuration
├── scripts/
│   ├── run_tests.py              # Custom test runner script
│   └── run_example_tests.py      # Example server test runner
├── .env.example                  # Example environment file
├── .gitignore                    # Git ignore rules
├── README.md                     # This file
├── mcp_config.json.example       # Example MCP server configuration
├── docs/
│   ├── claude-agent.md           # Claude Agent SDK + Vertex AI walkthrough
│   └── mcp-guide.md              # Comprehensive MCP user guide
├── src/
│   ├── __init__.py               # Package init
│   ├── claude_agent_chatbot.py   # Claude Agent REPL (default CLI)
│   ├── claude_agent_client.py    # Claude SDK helper / session manager
│   ├── claude_sdk_fallback.py    # Local stub used in tests when SDK is unavailable
│   ├── config.py                 # Configuration management for Claude + Gemini helpers
│   ├── gemini_client.py          # Legacy Gemini/Vertex AI client wrapper (still used in tests)
│   ├── chatbot.py                # Legacy Gemini chatbot implementation
│   ├── mcp_config.py             # MCP configuration handling
│   └── mcp_manager.py            # MCP client management
└── tests/
    ├── __init__.py               # Test package init
    ├── conftest.py               # Pytest fixtures and configuration
    ├── test_config.py            # Configuration tests
    ├── test_gemini_client.py     # Gemini client tests
    ├── test_chatbot.py           # Legacy Gemini chatbot functionality tests
    ├── test_claude_agent_chatbot.py # Claude REPL behaviour
    ├── test_claude_agent_client.py  # Claude client helper tests
    ├── test_main.py              # Main entry point tests
    ├── test_integration.py       # Integration tests
    ├── test_mcp_config.py        # MCP configuration tests
    ├── test_mcp_manager.py       # MCP manager tests
    ├── test_mcp_http_transport.py   # HTTP/SSE transport tests
    ├── test_mcp_multi_server.py  # Multi-server coordination tests
    ├── test_mcp_oauth.py         # OAuth authentication tests
    └── test_mcp_retry.py         # Connection retry tests
```

## Configuration

The application uses the following configuration (can be modified in `src/config.py`):

- **Project ID**: `your-gcp-project-id` (override in `.env` via `GOOGLE_CLOUD_PROJECT`)
- **Location**: `us-east1` (override in `.env` via `GOOGLE_CLOUD_LOCATION`)
- **Default Claude Model**: `claude-sonnet-4-5-20250929` (change via `CLAUDE_MODEL`)
- **Anthropic API Version**: `2023-06-01` (set `CLAUDE_API_VERSION` to override)
- **Max History Length**: 10 conversation turns
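
The 10-turn history cap can be pictured with a bounded deque, where each turn contributes one user message and one assistant message (a sketch of the idea, not the project's actual code):

```python
from collections import deque

MAX_TURNS = 10
history = deque(maxlen=MAX_TURNS * 2)  # one user + one assistant message per turn

for i in range(15):  # simulate 15 turns; only the last 10 survive
    history.append(("user", f"question {i}"))
    history.append(("assistant", f"answer {i}"))

print(len(history), history[0])  # 20 ('user', 'question 5')
```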

## Troubleshooting

### "Failed to start Claude Agent REPL"

Make sure you've:

1. Authenticated with Google Cloud: `gcloud auth application-default login`
2. Enabled the Vertex AI API and Anthropic publisher access for your project/region
3. Granted the `Vertex AI User` role to the identity running the CLI
4. Installed dependencies (`uv sync` or `pip install -e .`) so `google-auth` is available

### "Unable to refresh Google credentials"

Check that:

1. ADC credentials are active (`gcloud auth application-default login` or service account JSON)
2. The `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values match your deployment
3. The executing user/service account has billing enabled for the project
4. You are targeting a region that exposes Claude through Vertex AI

### Falling back to the public Anthropic API

If Vertex access is unavailable you can still run the REPL by setting `CLAUDE_VERTEX_ENABLED=false` and exporting `ANTHROPIC_API_KEY`. The helper automatically reconfigures the Claude SDK to use the public endpoint and keeps MCP tooling enabled.

## Testing

This project includes a comprehensive test suite with 190+ tests covering all functionality, including example MCP servers.

### Running Tests

**Quick test run:**

```bash
# Install dev dependencies (includes pytest-cov for coverage)
make install-dev

# Run all tests
make test
```

**Testing commands:**

```bash
make test # Run all tests with pytest
make test-v # Run tests with verbose output
make test-cov # Run tests with coverage report
make test-unit # Run only unit tests
make test-int # Run only integration tests
```

**Example MCP Server Tests:**

```bash
make test-examples # Run all example server tests
make test-examples-v # Run with verbose output
make test-examples-cov # Run with coverage
make test-filesystem # Run only filesystem server tests
make test-weather # Run only weather server tests
make server-check # Check server health
```

**Alternative (direct uv commands):**

```bash
# Run all tests
uv run pytest tests/ -v

# Using the custom test runner
uv run python scripts/run_tests.py --verbose
uv run python scripts/run_tests.py --coverage
uv run python scripts/run_tests.py --unit

# Example server tests
uv run python scripts/run_example_tests.py --filesystem
uv run python scripts/run_example_tests.py --weather
```

### Test Categories

**Unit Tests:**

- `test_config.py` - Configuration management (6 tests)
- `test_claude_agent_client.py` - Claude Agent client helper (6 tests)
- `test_claude_agent_chatbot.py` - Claude REPL commands and history (7 tests)
- `test_gemini_client.py` - Gemini API client functionality (11 tests)
- `test_chatbot.py` - Interactive chatbot features (23 tests)
- `test_main.py` - Main entry point and CLI (8 tests)

**MCP Framework Tests:**

- `test_mcp_manager.py` - MCP client management (25+ tests)
- `test_mcp_config.py` - MCP configuration handling (15+ tests)
- `test_mcp_http_transport.py` - HTTP/SSE transport tests (20+ tests)
- `test_mcp_multi_server.py` - Multi-server coordination (15+ tests)
- `test_mcp_oauth.py` - OAuth authentication (20+ tests)
- `test_mcp_retry.py` - Connection retry logic (10+ tests)

**Example Server Tests:**

- `test_filesystem_server.py` - Filesystem MCP server (44 tests)
- `test_weather_server.py` - Weather MCP server (39 tests)

**Integration Tests:**

- `test_integration.py` - Full system integration scenarios

### Test Coverage

The test suite covers:

- ✅ **Configuration**: Environment variables, defaults, static methods
- ✅ **Claude Agent**: Session lifecycle management, MCP registration, command handling
- ✅ **Chatbot UI**: Commands, history, display formatting, input validation
- ✅ **CLI Interface**: Argument parsing, exception handling, lifecycle management
- ✅ **Integration**: End-to-end workflows, component interactions
- ✅ **Error Handling**: Network failures, API errors, user interrupts

### Test Features

- **Comprehensive mocking** - No external API calls during testing
- **No hanging tests** - Properly handles infinite loops and user input
- **Fixtures and utilities** - Reusable test components in `conftest.py`
- **Multiple test runners** - Standard pytest and custom runner with options
- **CI/CD ready** - Configured for automated testing pipelines

## Documentation

- 📚 **[Documentation Index](docs/README.md)** - Complete documentation overview
- 🤖 **[Claude Agent Guide](docs/claude-agent.md)** - Configure the Claude Agent SDK on Vertex AI
- 📖 **[MCP User Guide](docs/mcp-guide.md)** - Comprehensive guide to using MCP features
- ⚙️ **[MCP Configuration Reference](docs/mcp-config-reference.md)** - Detailed configuration options
- 🔧 **[MCP API Reference](docs/mcp-api.md)** - Technical API documentation
- 🔍 **[MCP Troubleshooting](docs/mcp-troubleshooting.md)** - Solutions to common problems
- 🚀 **[Example MCP Servers](examples/README.md)** - Ready-to-use example servers
- 🏗️ **[Implementation Details](plans/implement-mcp-client.md)** - Technical implementation notes

## Development

To extend or modify the chatbot:

### Architecture

1. **`ClaudeAgentClient`** (`src/claude_agent_client.py`) - Creates Claude agents/sessions and sends messages
2. **`ClaudeAgentChatbot`** (`src/claude_agent_chatbot.py`) - Terminal UI that wraps the Claude Agent SDK
3. **`Config`** (`src/config.py`) - Centralised configuration management for Claude and Gemini helpers
4. **`main.py`** - Entry point and CLI argument handling
5. **Legacy Gemini modules** (`src/gemini_client.py`, `src/chatbot.py`) - Retained for backwards compatibility and tests

### Adding New Features

1. **New Commands**: Extend `ClaudeAgentChatbot.handle_command` for CLI additions
2. **Model Parameters**: Modify settings in the `Config` class
3. **API Features**: Add helpers to `ClaudeAgentClient` (or the fallback stub) for advanced SDK usage
4. **UI Enhancements**: Update rendering helpers in `ClaudeAgentChatbot`

### Development Workflow

```bash
# Set up development environment
make dev-setup # Installs deps + pre-commit hooks

# Run tests during development
make test # Run all tests
make test-cov # Run with coverage report

# Code quality
make format # Format code with black and isort
make lint # Run flake8 linting
make pre-commit-run # Run all pre-commit hooks

# Dependency management
make add PKG=requests # Add a new dependency
make add-dev PKG=pytest # Add a dev dependency
make sync # Sync dependencies from pyproject.toml

# Clean up
make clean # Remove Python cache files
make clean-all # Remove cache + virtual environment
```

**Alternative (direct uv commands):**

```bash
uv sync --extra dev
uv run pre-commit install
uv run pytest tests/ -v --tb=short
uv run black src/ tests/
uv run isort src/ tests/
uv run flake8 src/ tests/
```

### Writing Tests

When adding new functionality:

1. Add unit tests for individual methods/functions
2. Add integration tests for feature workflows
3. Use the fixtures in `tests/conftest.py` for common mocking
4. Follow the existing test patterns and naming conventions
5. Ensure tests don't make external API calls

## Security Notes

- Never commit your `.env` file or service account credentials
- The `.gitignore` file is configured to exclude sensitive files
- Store your service account JSON securely
- Consider using Google Cloud Secret Manager for production deployments

## License

MIT License. See [LICENSE](LICENSE) for details.