{"id":30177586,"url":"https://github.com/ebowwa/ai-proxy-core","last_synced_at":"2025-08-12T05:00:26.673Z","repository":{"id":307595172,"uuid":"1030065528","full_name":"ebowwa/ai-proxy-core","owner":"ebowwa","description":"Minimal, stateless AI service proxy for Gemini and other LLMs","archived":false,"fork":false,"pushed_at":"2025-08-08T19:39:17.000Z","size":671,"stargazers_count":0,"open_issues_count":12,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-08-08T19:41:23.652Z","etag":null,"topics":["ai","api-proxy","async","gemini","library","llm","multimodal","ollama","openai","opentelemetry","python","websocket"],"latest_commit_sha":null,"homepage":"https://pypi.org/project/ai-proxy-core/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ebowwa.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-08-01T03:20:47.000Z","updated_at":"2025-08-08T19:39:20.000Z","dependencies_parsed_at":"2025-08-01T04:47:12.751Z","dependency_job_id":"417d84ce-e414-4146-86f7-4c442c52378e","html_url":"https://github.com/ebowwa/ai-proxy-core","commit_stats":null,"previous_names":["ebowwa/ai-proxy-core"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/ebowwa/ai-proxy-core","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ebowwa%2Fai-proxy-core","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ebowwa%2Fai-proxy-core/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ebowwa%2Fai-proxy-core/
releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ebowwa%2Fai-proxy-core/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ebowwa","download_url":"https://codeload.github.com/ebowwa/ai-proxy-core/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ebowwa%2Fai-proxy-core/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":270005547,"owners_count":24510933,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-12T02:00:09.011Z","response_time":80,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","api-proxy","async","gemini","library","llm","multimodal","ollama","openai","opentelemetry","python","websocket"],"created_at":"2025-08-12T05:00:24.602Z","updated_at":"2025-08-12T05:00:26.638Z","avatar_url":"https://github.com/ebowwa.png","language":"Python","readme":"# AI Proxy Core\n\nA unified Python package providing a single interface for AI completions across multiple providers (OpenAI, Gemini, Ollama). 
Features intelligent model management, automatic provider routing, and zero-config setup.\n\n\u003e 💡 **Why not LangChain?** Read our [philosophy and architectural rationale](https://github.com/ebowwa/ai-proxy-core/issues/13) for choosing simplicity over complexity.\n\n\u003e 🎯 **What's Next?** See our [wrapper layer roadmap](https://github.com/ebowwa/ai-proxy-core/issues/14) for planned features and what belongs in a clean LLM wrapper.\n\n## Installation\n\nBasic (Google Gemini only):\n```bash\npip install ai-proxy-core\n```\n\nWith specific providers (optional dependencies; quote the extras so shells like zsh don't expand the brackets):\n```bash\npip install \"ai-proxy-core[openai]\"     # OpenAI support\npip install \"ai-proxy-core[anthropic]\"  # Anthropic support (coming soon)\npip install \"ai-proxy-core[telemetry]\"  # OpenTelemetry support\npip install \"ai-proxy-core[all]\"        # Everything\n```\n\nOr install from source:\n```bash\ngit clone https://github.com/ebowwa/ai-proxy-core.git\ncd ai-proxy-core\npip install -e .\n# With all extras: pip install -e \".[all]\"\n```\n\n## Quick Start\n\n\u003e 🤖 **AI Integration Help**: \n\u003e - **Using the library?** Copy our [user agent prompt](.claude/agents/ai-proxy-core-user.md) to any LLM for instant integration guidance and code examples\n\u003e - **Developing the library?** Use our [developer agent prompt](.claude/agents/ai-proxy-core-developer.md) for architecture details and contribution help\n\n### Unified Interface (Recommended)\n\n```python\nfrom ai_proxy_core import CompletionClient\n\n# Single client for all providers\nclient = CompletionClient()\n\n# Works with any model - auto-detects provider\nresponse = await client.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n    model=\"gpt-4\"  # Auto-routes to OpenAI\n)\n\nresponse = await client.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n    model=\"gemini-1.5-flash\"  # Auto-routes to Gemini\n)\n\nresponse = await client.create_completion(\n    
messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n    model=\"llama2\"  # Auto-routes to Ollama\n)\n\n# All return the same standardized format\nprint(response[\"choices\"][0][\"message\"][\"content\"])\n```\n\n### Intelligent Model Selection\n\n```python\n# Find the best model for your needs\nbest_model = await client.find_best_model({\n    \"multimodal\": True,\n    \"min_context_limit\": 32000,\n    \"local_preferred\": False\n})\n\nresponse = await client.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"Describe this image\"}],\n    model=best_model[\"id\"]\n)\n```\n\n### Model Discovery\n\n```python\n# List all available models across providers\nmodels = await client.list_models()\nfor model in models:\n    print(f\"{model['id']} ({model['provider']}) - {model['context_limit']:,} tokens\")\n\n# List models from specific provider\nopenai_models = await client.list_models(provider=\"openai\")\n```\n\n## Ollama Integration\n\n### Prerequisites\n```bash\n# Install Ollama from https://ollama.ai\n# Start Ollama service\nollama serve\n\n# Pull a model\nollama pull llama3.2\n```\n\n### Using Ollama with CompletionClient\n```python\nfrom ai_proxy_core import CompletionClient, ModelManager\n\n# Option 1: Auto-detection (Ollama will be detected if running)\nclient = CompletionClient()\n\n# Option 2: With custom ModelManager\nmanager = ModelManager()\nclient = CompletionClient(model_manager=manager)\n\n# List Ollama models\nmodels = await client.list_models(provider=\"ollama\")\nprint(f\"Available Ollama models: {[m['id'] for m in models]}\")\n\n# Create completion\nresponse = await client.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n    model=\"llama3.2\",\n    provider=\"ollama\",  # Optional, auto-detected from model name\n    temperature=0.7\n)\n```\n\n### Direct Ollama Usage\n```python\nfrom ai_proxy_core import OllamaCompletions\n\nollama = OllamaCompletions()\n\n# List available models\nmodels = 
ollama.list_models()\nprint(f\"Available models: {models}\")\n\n# Create completion\nresponse = await ollama.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"Explain quantum computing\"}],\n    model=\"llama3.2\",\n    temperature=0.7,\n    max_tokens=500\n)\n```\n\nSee [examples/ollama_complete_guide.py](examples/ollama_complete_guide.py) for comprehensive examples including error handling, streaming, and advanced features.\n\n## Advanced Usage\n\n### Provider-Specific Completions\n\nIf you need provider-specific features, you can still use the individual clients:\n\n```python\nfrom ai_proxy_core import GoogleCompletions, OpenAICompletions, OllamaCompletions\n\n# Google Gemini with safety settings\ngoogle = GoogleCompletions(api_key=\"your-gemini-api-key\")\nresponse = await google.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n    model=\"gemini-1.5-flash\",\n    safety_settings=[{\"category\": \"HARM_CATEGORY_HARASSMENT\", \"threshold\": \"BLOCK_MEDIUM_AND_ABOVE\"}]\n)\n\n# OpenAI with tool calling\nopenai = OpenAICompletions(api_key=\"your-openai-key\")\nresponse = await openai.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"What's the weather?\"}],\n    model=\"gpt-4\",\n    tools=[{\"type\": \"function\", \"function\": {\"name\": \"get_weather\"}}]\n)\n\n# Ollama for local models\nollama = OllamaCompletions()  # Auto-detects localhost:11434\nresponse = await ollama.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n    model=\"llama3.2\",\n    temperature=0.7\n)\n```\n\n### OpenAI-Compatible Endpoints\n\n```python\n# Works with any OpenAI-compatible API (Groq, Anyscale, Together, etc.)\ngroq = OpenAICompletions(\n    api_key=\"your-groq-key\",\n    base_url=\"https://api.groq.com/openai/v1\"\n)\n\nresponse = await groq.create_completion(\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n    model=\"mixtral-8x7b-32768\"\n)\n```\n\n### 
Gemini Live Session\n\n```python\nfrom ai_proxy_core import GeminiLiveSession\n\n# Example 1: Basic session (no system prompt)\nsession = GeminiLiveSession(api_key=\"your-gemini-api-key\")\n\n# Example 2: Session with system prompt (simple string format)\nsession = GeminiLiveSession(\n    api_key=\"your-gemini-api-key\",\n    system_instruction=\"You are a helpful voice assistant. Be concise and friendly.\"\n)\n\n# Example 3: Session with built-in tools enabled\nsession = GeminiLiveSession(\n    api_key=\"your-gemini-api-key\",\n    enable_code_execution=True,      # Enable Python code execution\n    enable_google_search=True,       # Enable web search\n    system_instruction=\"You are a helpful assistant with access to code execution and web search.\"\n)\n\n# Example 4: Session with custom function declarations\nfrom google.genai import types\n\ndef get_weather(location: str) -\u003e dict:\n    # Your custom function implementation\n    return {\"location\": location, \"temp\": 72, \"condition\": \"sunny\"}\n\nweather_function = types.FunctionDeclaration(\n    name=\"get_weather\",\n    description=\"Get current weather for a location\",\n    parameters=types.Schema(\n        type=\"OBJECT\",\n        properties={\n            \"location\": types.Schema(type=\"STRING\", description=\"City name\")\n        },\n        required=[\"location\"]\n    )\n)\n\nsession = GeminiLiveSession(\n    api_key=\"your-gemini-api-key\",\n    custom_tools=[types.Tool(function_declarations=[weather_function])],\n    system_instruction=\"You can help with weather information.\"\n)\n\n# Set up callbacks\nsession.on_audio = lambda data: print(f\"Received audio: {len(data)} bytes\")\nsession.on_text = lambda text: print(f\"Received text: {text}\")\nsession.on_function_call = lambda call: handle_function_call(call)\n\nasync def handle_function_call(call):\n    if call[\"name\"] == \"get_weather\":\n        result = get_weather(**call[\"args\"])\n        await 
session.send_function_result(result)\n\n# Start session\nawait session.start()\n\n# Send audio/text\nawait session.send_audio(audio_data)\nawait session.send_text(\"What's the weather in Boston?\")\n\n# Stop when done\nawait session.stop()\n```\n\n### Integration with FastAPI\n\n#### Chat Completions API\n```python\nfrom fastapi import FastAPI, HTTPException\nfrom pydantic import BaseModel\nfrom ai_proxy_core import CompletionClient\n\napp = FastAPI()\nclient = CompletionClient()\n\nclass CompletionRequest(BaseModel):\n    messages: list\n    model: str = \"gemini-1.5-flash\"\n    temperature: float = 0.7\n\n@app.post(\"/api/chat/completions\")\nasync def create_completion(request: CompletionRequest):\n    try:\n        response = await client.create_completion(\n            messages=request.messages,\n            model=request.model,\n            temperature=request.temperature\n        )\n        return response\n    except Exception as e:\n        raise HTTPException(status_code=500, detail=str(e))\n```\n\n#### WebSocket for Gemini Live (Fixed in v0.3.3)\n\n```python\nfrom fastapi import FastAPI, WebSocket, WebSocketDisconnect\nfrom google import genai\nfrom google.genai import types\nimport asyncio\n\napp = FastAPI()\n\n@app.websocket(\"/api/gemini/ws\")\nasync def gemini_websocket(websocket: WebSocket):\n    await websocket.accept()\n    \n    # Create Gemini client\n    client = genai.Client(\n        http_options={\"api_version\": \"v1beta\"},\n        api_key=\"your-gemini-api-key\"\n    )\n    \n    # Configure for text (audio requires PCM format)\n    config = types.LiveConnectConfig(\n        response_modalities=[\"TEXT\"],\n        generation_config=types.GenerationConfig(\n            temperature=0.7,\n            max_output_tokens=1000\n        )\n    )\n    \n    # Connect using async context manager\n    async with client.aio.live.connect(\n        model=\"gemini-2.0-flash-exp\",\n        config=config\n    ) as session:\n        \n        # Handle 
bidirectional communication\n        async def receive_from_client():\n            async for message in websocket.iter_json():\n                if message[\"type\"] in [\"text\", \"message\"]:\n                    text = message.get(\"data\", {}).get(\"text\", \"\")\n                    if text:\n                        await session.send(input=text, end_of_turn=True)\n        \n        async def receive_from_gemini():\n            while True:\n                turn = session.receive()\n                async for response in turn:\n                    if hasattr(response, 'server_content'):\n                        content = response.server_content\n                        if hasattr(content, 'model_turn'):\n                            for part in content.model_turn.parts:\n                                if hasattr(part, 'text') and part.text:\n                                    await websocket.send_json({\n                                        \"type\": \"response\",\n                                        \"text\": part.text\n                                    })\n        \n        # Run both tasks concurrently\n        task1 = asyncio.create_task(receive_from_client())\n        task2 = asyncio.create_task(receive_from_gemini())\n        \n        # Wait for either to complete\n        done, pending = await asyncio.wait(\n            [task1, task2],\n            return_when=asyncio.FIRST_COMPLETED\n        )\n        \n        # Clean up\n        for task in pending:\n            task.cancel()\n```\n\n**Try the HTML Demo:**\n```bash\n# Start the FastAPI server\nuv run main.py\n\n# Open the HTML demo in your browser\nopen examples/gemini_live_demo.html\n```\n\nThe demo provides a full-featured chat interface with WebSocket connection to Gemini Live. 
Note: Audio input requires PCM format conversion (not yet implemented).\n\n## Features\n\n### 🚀 **Unified Interface**\n- **Single client for all providers** - No more provider-specific code\n- **Automatic provider routing** - Detects provider from model name\n- **Intelligent model selection** - Find best model based on requirements\n- **Zero-config setup** - Auto-detects available providers from environment\n\n### 🧠 **Model Management**\n- **Cross-provider model discovery** - List models from OpenAI, Gemini, Ollama\n- **Rich model metadata** - Context limits, capabilities, multimodal support\n- **Automatic model provisioning** - Downloads Ollama models as needed\n- **Model compatibility checking** - Ensures models support requested features\n\n### 🔧 **Developer Experience**\n- **No framework dependencies** - Use with FastAPI, Flask, or any Python app\n- **Async/await support** - Modern async Python\n- **Type hints** - Full type annotations\n- **Easy testing** - Mock the unified client in your tests\n- **Backward compatible** - All existing provider-specific code continues to work\n\n### 🎯 **Advanced Features**\n- **WebSocket support** - Real-time audio/text streaming with Gemini Live\n- **Built-in tools** - Code execution and Google Search with simple flags\n- **Custom functions** - Add your own function declarations\n- **Optional telemetry** - OpenTelemetry integration for production monitoring\n- **Provider-specific optimizations** - Access advanced features when needed\n\n### Telemetry\n\nBasic observability with OpenTelemetry (optional):\n\n```bash\n# Install with: pip install \"ai-proxy-core[telemetry]\"\n\n# Enable telemetry via environment variables\nexport OTEL_ENABLED=true\nexport OTEL_EXPORTER_TYPE=console  # or \"otlp\" for production\nexport OTEL_ENDPOINT=localhost:4317  # for OTLP exporter\n\n# Automatic telemetry for:\n# - Request counts by model/status\n# - Request latency tracking\n# - Session duration for WebSockets\n# - Error tracking with types\n```\n\nThe telemetry is completely optional and has zero overhead when disabled.\n\n## Project Structure\n\n\u003e 📝 **Note:** Full documentation of the project structure is being tracked in [Issue #12](https://github.com/ebowwa/ai-proxy-core/issues/12)\n\nThis project serves dual purposes:\n- **Python Library** (`/ai_proxy_core`): Installable via pip for use in Python applications\n- **Web Service** (`/api`): FastAPI endpoints for REST API access\n\n## Development\n\n### Releasing New Versions\n\nWe provide an automated release script that handles version bumping, building, and publishing:\n\n```bash\n# Make the script executable (first time only)\nchmod +x release.sh\n\n# Release a new version\n./release.sh 0.1.9\n```\n\nThe script will:\n1. Show current version and validate the new version format\n2. Prompt for a release description (for CHANGELOG)\n3. Update version in all necessary files (pyproject.toml, setup.py, __init__.py)\n4. Update CHANGELOG.md with your description\n5. Build the package\n6. Upload to PyPI\n7. Commit changes and create a git tag\n8. Push to GitHub with the new tag\n\n### Manual Build Process\n\nIf you prefer to build manually:\n\n```bash\nuv run python setup.py sdist bdist_wheel\ntwine upload dist/*\n```\n\n## License\n\nMIT","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Febowwa%2Fai-proxy-core","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Febowwa%2Fai-proxy-core","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Febowwa%2Fai-proxy-core/lists"}