{"id":30176348,"url":"https://github.com/braindead-dev/garry-tan","last_synced_at":"2025-08-12T02:45:35.838Z","repository":{"id":305078797,"uuid":"1021858364","full_name":"braindead-dev/Garry-Tan","owner":"braindead-dev","description":null,"archived":false,"fork":false,"pushed_at":"2025-08-03T23:44:44.000Z","size":114,"stargazers_count":0,"open_issues_count":2,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-08-04T01:34:38.811Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/braindead-dev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-07-18T04:13:14.000Z","updated_at":"2025-08-03T23:44:47.000Z","dependencies_parsed_at":"2025-07-18T15:21:55.026Z","dependency_job_id":"12d36a5e-6073-468b-8864-16c6d80a4514","html_url":"https://github.com/braindead-dev/Garry-Tan","commit_stats":null,"previous_names":["braindead-dev/gary-tan","braindead-dev/garry-tan"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/braindead-dev/Garry-Tan","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/braindead-dev%2FGarry-Tan","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/braindead-dev%2FGarry-Tan/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/braindead-dev%2FGarry-Tan/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/braindead-dev%2FGarry-Tan/manifests","owner_url":"https://repo
s.ecosyste.ms/api/v1/hosts/GitHub/owners/braindead-dev","download_url":"https://codeload.github.com/braindead-dev/Garry-Tan/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/braindead-dev%2FGarry-Tan/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":269991519,"owners_count":24509009,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-12T02:00:09.011Z","response_time":80,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-08-12T02:45:31.832Z","updated_at":"2025-08-12T02:45:35.802Z","avatar_url":"https://github.com/braindead-dev.png","language":"TypeScript","readme":"# Garry Tan Discord Bot\n\nGarry Tan (CEO of Y Combinator)'s soul entrapped as a Discord bot. Intelligently engages in conversation when he wants to, without needing to be mentioned. Just like the real thing. This is really more of a general framework for replication agents.\n\n## 📋 Prerequisites\n\n- Node.js (v18 or higher)\n- A Discord bot token\n- An API key for an LLM completions API (OpenAI, Groq, etc.)\n\n## 🛠️ Installation\n\n1. **Clone the repository**\n   ```bash\n   git clone https://github.com/braindead-dev/Garry-Tan\n   cd Garry-Tan\n   ```\n\n2. **Install dependencies**\n   ```bash\n   npm install\n   ```\n\n3. 
**Set up environment variables**\n   Create a `.env` file in the root directory:\n   ```env\n   DISCORD_TOKEN=your_discord_bot_token_here\n   \n   # AI Service API Keys (add the ones you plan to use)\n   GROQ_API_KEY=your_groq_api_key_here\n   OPENAI_API_KEY=your_openai_api_key_here\n   XAI_API_KEY=your_xai_api_key_here\n   GEMINI_API_KEY=your_gemini_api_key_here\n   ```\n\n4. **Configure the bot**\n   Edit `src/core/config.ts` to customize the bot's behavior:\n   ```typescript\n   // Main AI service configuration\n   const MAIN_SERVICE_CONFIG = {\n     service: 'groq' as const,           // groq | openai | xai | gemini\n     model: 'gemma2-9b-it'\n   };\n   \n   // Confidence check AI service configuration  \n   const CONFIDENCE_SERVICE_CONFIG = {\n     service: 'openai' as const,         // groq | openai | xai | gemini\n     model: 'gpt-4o-mini',\n     threshold: 0.7,                     // How confident to be before responding\n     messageHistoryLimit: 5\n   };\n   ```\n\n5. **Build the project**\n   ```bash\n   npm run build\n   ```\n\n6. **Start the bot**\n   ```bash\n   npm start\n   ```\n\n## 🔧 Configuration\n\n### Service Providers\n\nThe bot supports multiple AI service providers with automatic endpoint and API key selection. 
Configure your preferred services in `src/core/config.ts`:\n\n| Service | Endpoint | Environment Variable |\n|---------|----------|---------------------|\n| **Groq** | `https://api.groq.com/openai/v1/chat/completions` | `GROQ_API_KEY` |\n| **OpenAI** | `https://api.openai.com/v1/chat/completions` | `OPENAI_API_KEY` |\n| **xAI** | `https://api.x.ai/v1/chat/completions` | `XAI_API_KEY` |\n| **Gemini** | `https://generativelanguage.googleapis.com/v1beta/openai/chat/completions` | `GEMINI_API_KEY` |\n\n### Dual AI System\n\nThe bot uses two separate AI services:\n- **Main Service**: Generates responses and handles tool calls (configured in `MAIN_SERVICE_CONFIG`)\n- **Confidence Service**: Determines when to respond (configured in `CONFIDENCE_SERVICE_CONFIG`)\n\nThis allows you to use a fast, cost-effective model for confidence checks while using a more capable model for responses.\n\n### Easy Configuration\n\nAll main settings are clearly organized at the top of `src/core/config.ts`:\n\n```typescript\n// =============================================================================\n// MAIN CONFIGURATION - Edit these values to customize the bot\n// =============================================================================\n\nconst PERSONALITY = {\n  name: 'Garry Tan',\n  description: '...',\n  communicationStyle: '...'\n};\n\nconst MAIN_SERVICE_CONFIG = {\n  service: 'groq' as const,           // Choose your service\n  model: 'gemma2-9b-it'               // Choose your model\n};\n\nconst BOT_SETTINGS = {\n  messageHistoryLimit: 10,            // How many messages to consider\n  splitMessages: true,                // Split long responses\n  messageSplitDelay: 200             // Delay between message parts\n};\n```\n\n### Personality Settings\n\nEdit `src/core/config.ts` to customize the bot's personality:\n\n```typescript\nconst PERSONALITY = {\n  name: 'Garry Tan',\n  description: 'Canadian-American venture capitalist, executive, CEO of Y Combinator...',\n  
communicationStyle: 'Concise, thoughtful, pragmatic, approachable and friendly. Uses decent grammar and capitalization in his messages.'\n};\n```\n\n### Response Confidence\n\nThe bot uses a confidence scoring system (0-1) to determine when to respond. Adjust the threshold via `CONFIDENCE_SERVICE_CONFIG.threshold` in `src/core/config.ts` (default: 0.7).\n\n### Message History\n\nConfigure how many previous messages the bot considers:\n```typescript\nmessageHistoryLimit: 10  // Used for response generation\n```\n\nThe confidence check system has its own separate message history limit:\n```typescript\nconfidenceCheck: {\n  messageHistoryLimit: 5,  // Separate limit for confidence evaluation\n  // ... other config\n}\n```\n\nThis allows the confidence check to consider fewer messages (for faster evaluation) while response generation uses more context.\n\n## 🤖 How It Works\n\n### Architecture Overview\n\n(using some models that worked for me)\n```\nDiscord Message → Confidence Check → Message Processing → Response Generation\n                       ↓                    ↓                    ↓\n                   AI Analysis        Format History        LLM + Tools\n                  (gpt-4o-mini)       (Last 10 msgs)       (gemma2-9b-it)\n```\n\n### Message Processing Flow\n\n1. **Message Reception**: Bot receives all messages but filters out its own\n2. **Auto-trigger Check**: If the bot is mentioned or replied to, skip the confidence check and respond\n3. **Confidence Analysis**: AI scores how appropriate a response would be (0-1) and checks whether it passes the configured threshold\n4. **Context Gathering**: Fetches and formats the last 10 messages from the channel\n5. **Initial Response Generation**: Uses personality config and context to generate a response or tool calls\n6. **Tool Execution**: If tools are requested, executes them and incorporates results\n7. **Final Response**: Generates the final response incorporating tool results\n8. 
**Message Delivery**: Sends the response, potentially split across multiple messages\n\n## 🛠️ Tool System\n\nThe bot features a modular tool system that allows it to perform actions beyond just text responses.\n\n### Available Tools\n\n#### GIF Search (`search_gif`)\n- **Purpose**: Search for and send animated GIFs\n- **Usage**: Bot automatically decides when a GIF would enhance the conversation\n- **Example**: \n  ```\n  User: \"Just closed our Series A!\"\n  Garry Tan: \"Congratulations! 🎉\" [sends celebration GIF]\n  ```\n\n### Tool Architecture\n\nThe tool system is built with modularity in mind:\n\n```typescript\n// Tool Registry (src/core/tool-handler.ts)\nconst toolRegistry: Record\u003cstring, Tool\u003e = {\n  search_gif: {\n    definition: gifSearchTool.function,  // OpenAI function schema\n    execute: (client, message, args) =\u003e searchGif(client, message, args.query),\n  },\n  // Add more tools here...\n};\n```\n\n### Adding New Tools\n\n1. **Create the tool implementation** in `src/tools/your-tool.ts`:\n   ```typescript\n   export const yourToolDefinition = {\n     name: 'your_tool',\n     description: 'What your tool does',\n     parameters: {\n       // OpenAI function schema\n     }\n   };\n   \n   export async function executeYourTool(client, message, args) {\n     // Tool implementation\n     return result;\n   }\n   ```\n\n2. **Register the tool** in `src/core/tool-handler.ts`:\n   ```typescript\n   import { executeYourTool, yourToolDefinition } from '../tools/your-tool.js';\n   \n   const toolRegistry: Record\u003cstring, Tool\u003e = {\n     // ... existing tools\n     your_tool: {\n       definition: yourToolDefinition,\n       execute: executeYourTool,\n     },\n   };\n   ```\n\n### Tool Execution Flow\n\n1. **LLM Decision**: The model decides whether to use tools based on conversation context\n2. **Tool Call**: Model returns structured tool calls with parameters\n3. 
**Execution**: `handleToolCalls` function executes each tool with error handling\n4. **Result Integration**: Tool results are added to conversation context\n5. **Final Response**: Model generates final response incorporating tool results\n\n### Error Handling\n\nThe tool system includes robust error handling:\n- **Parse Errors**: Invalid JSON arguments are caught and reported\n- **Execution Errors**: Tool failures are gracefully handled with error messages\n- **Unknown Tools**: Calls to non-existent tools are handled safely\n- **Conversation Continuity**: Errors don't break the conversation flow\n\n## 💬 Conversation Examples\n\n### Intelligent Mention Handling\n\nThe bot properly handles Discord mentions:\n- `\u003c@123456789\u003e` → `[Username \u003c@123456789\u003e]`\n- Bot mentions → `[\u003c@me\u003e]`\n- Role mentions → `[RoleName \u003c@\u0026123456789\u003e]`\n- Channel mentions → `[#channel-name \u003c#123456789\u003e]`\n\n### Direct Mentions (Auto-trigger)\n```\nUser: \"Hey @Garry Tan, thoughts on this startup idea?\"\nGarry Tan: \"I'd love to hear more about the problem you're solving...\"\n```\n\n### Replies (Auto-trigger)\n```\nUser: \"Is anyone in this channel knowledgeable about raising?\"\nGarry Tan: \"What stage are you at?\"\nUser: \"We're pre-seed, just built our MVP\"\nGarry Tan: \"That's exciting! How's user feedback been so far?\"\n```\n\n### Tool Usage\n```\nUser: \"We just got accepted into YC!\"\nGarry Tan: \"Incredible news! Welcome to the family! 
🚀\" [sends YC celebration GIF]\n```\n\n## 🔧 Development\n\n### Project Structure\n\n```\nsrc/\n├── core/\n│   ├── agent.ts          # Main agent logic and conversation flow\n│   ├── config.ts         # Configuration and personality settings\n│   └── tool-handler.ts   # Centralized tool management system\n├── tools/\n│   └── gif-search.ts     # GIF search tool implementation\n└── index.ts              # Discord bot initialization\n```\n\n### Key Design Principles\n\n- **Modularity**: Tools are self-contained and easy to add/remove\n- **Error Resilience**: Robust error handling prevents conversation breakage\n- **Context Awareness**: Proper conversation history management for tool calls\n- **Separation of Concerns**: Agent logic, tool execution, and configuration are separate\n\n#### Contributions always open!! Create a PR\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbraindead-dev%2Fgarry-tan","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbraindead-dev%2Fgarry-tan","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbraindead-dev%2Fgarry-tan/lists"}