{"id":31839574,"url":"https://github.com/graphlit/graphlit-client-typescript","last_synced_at":"2026-03-09T10:31:32.039Z","repository":{"id":228907684,"uuid":"775242664","full_name":"graphlit/graphlit-client-typescript","owner":"graphlit","description":"TypeScript client for Graphlit Platform","archived":false,"fork":false,"pushed_at":"2026-03-05T03:43:44.000Z","size":4833,"stargazers_count":6,"open_issues_count":0,"forks_count":2,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-03-05T08:49:02.376Z","etag":null,"topics":["api-client","api-client-typescript","chatbot","copilot","document-parser","graphlit","pdf-to-json","rag"],"latest_commit_sha":null,"homepage":"https://www.graphlit.com","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/graphlit.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-03-21T02:37:48.000Z","updated_at":"2026-03-05T03:43:48.000Z","dependencies_parsed_at":"2024-04-24T06:47:02.408Z","dependency_job_id":"01771255-5f9c-41a3-b041-c9f4859ead8d","html_url":"https://github.com/graphlit/graphlit-client-typescript","commit_stats":null,"previous_names":["graphlit/graphlit-client-typescript"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/graphlit/graphlit-client-typescript","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/graphlit%2Fgraphlit-client-typescript","tags_url":"https://repos.ecosyste.ms/api/v1/host
s/GitHub/repositories/graphlit%2Fgraphlit-client-typescript/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/graphlit%2Fgraphlit-client-typescript/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/graphlit%2Fgraphlit-client-typescript/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/graphlit","download_url":"https://codeload.github.com/graphlit/graphlit-client-typescript/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/graphlit%2Fgraphlit-client-typescript/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30291807,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-09T02:57:19.223Z","status":"ssl_error","status_checked_at":"2026-03-09T02:56:26.373Z","response_time":61,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["api-client","api-client-typescript","chatbot","copilot","document-parser","graphlit","pdf-to-json","rag"],"created_at":"2025-10-12T04:01:53.405Z","updated_at":"2026-03-09T10:31:32.028Z","avatar_url":"https://github.com/graphlit.png","language":"TypeScript","funding_links":[],"categories":[],"sub_categories":[],"readme":"# Graphlit TypeScript Client SDK\n\n[![npm version](https://badge.fury.io/js/graphlit-client.svg)](https://badge.fury.io/js/graphlit-client)\n[![License: 
MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\nThe official TypeScript/JavaScript SDK for the [Graphlit Platform](https://www.graphlit.com) - build AI-powered applications with knowledge retrieval in minutes.\n\n## 🚀 What is Graphlit?\n\nGraphlit is a cloud platform that handles the complex parts of building AI applications:\n\n- **Ingest any content** - PDFs, websites, audio, video, and more\n- **Chat with your data** - Using RAG (Retrieval-Augmented Generation)\n- **Extract insights** - Summaries, entities, and metadata\n- **Build knowledge graphs** - Automatically connect related information\n\n## ✨ What's New\n\n### v1.3.0 - Google SDK Migration 🔄\n\n- **BREAKING CHANGE**: Migrated from deprecated `@google/generative-ai` to new `@google/genai` SDK\n- **Improved Thinking Support** - Better detection of Google Gemini thinking/reasoning with proper `part.thought` API\n- **Enhanced Streaming** - More reliable streaming with the new Google SDK\n- **Migration Required** - See [Migration Guide](#migrating-from-google-generative-ai) below\n\n### v1.2.0 - Reasoning \u0026 Cancellation Support 🧠\n\n- **Reasoning/Thinking Detection** - See how AI models think through problems (Bedrock Nova, Deepseek, Anthropic)\n- **Stream Cancellation** - Stop long-running generations instantly with AbortSignal support\n- **Enhanced Streaming Events** - New `reasoning_update` events expose model thought processes\n\n### v1.1.0 - Streaming \u0026 Resilience\n\n- **Real-time streaming** - Watch AI responses appear word-by-word across 9 different providers\n- **Tool calling** - Let AI execute functions and retrieve data\n- **Extended provider support** - Native streaming integration with OpenAI, Anthropic, Google, Groq, Cerebras, Cohere, Mistral, AWS Bedrock, and Deepseek\n- **Better performance** - Optimized streaming with provider-specific SDKs\n- **Network resilience** - Automatic retry logic for transient failures\n\n## 📋 Table of 
Contents\n\n- [Quick Start](#quick-start)\n- [Installation](#installation)\n- [Setting Up](#setting-up)\n- [Migrating from @google/generative-ai](#migrating-from-google-generative-ai) 🔄\n- [Reasoning Support (New!)](#reasoning-support-new) 🧠\n- [Stream Cancellation (New!)](#stream-cancellation-new) 🛑\n- [Network Resilience](#network-resilience)\n- [Streaming Provider Support](#streaming-provider-support)\n- [Basic Examples](#basic-examples)\n- [Common Use Cases](#common-use-cases)\n- [Advanced Agent Features](#advanced-agent-features)\n- [Advanced Workflows](#advanced-workflows)\n- [API Reference](#api-reference)\n- [Testing \u0026 Examples](#testing--examples)\n- [Support](#support)\n\n## Quick Start\n\nGet started in 2 minutes:\n\n```bash\n# Install the SDK\nnpm install graphlit-client\n\n# Set your credentials (get free account at https://portal.graphlit.dev)\nexport GRAPHLIT_ORGANIZATION_ID=your_org_id\nexport GRAPHLIT_ENVIRONMENT_ID=your_env_id\nexport GRAPHLIT_JWT_SECRET=your_secret\n```\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\nconst client = new Graphlit(); // Uses env vars: GRAPHLIT_ORGANIZATION_ID, GRAPHLIT_ENVIRONMENT_ID, GRAPHLIT_JWT_SECRET\n\n// First, create a specification (or use your project default)\nconst spec = await client.createSpecification({\n  name: \"Assistant\",\n  type: Types.SpecificationTypes.Completion,\n  serviceType: Types.ModelServiceTypes.OpenAi,\n  openAI: {\n    model: Types.OpenAiModels.Gpt4O_128K,\n  },\n});\n\n// Start chatting with AI\nawait client.streamAgent(\n  \"Tell me a joke\",\n  (event) =\u003e {\n    if (event.type === \"message_update\") {\n      console.log(event.message.message);\n    }\n  },\n  undefined, // conversationId (optional)\n  { id: spec.createSpecification.id }, // specification\n);\n```\n\n## Installation\n\n```bash\nnpm install graphlit-client\n```\n\n### Want Real-time Streaming?\n\nInstall the LLM SDK for streaming responses:\n\n```bash\n# For OpenAI streaming\nnpm 
install openai\n\n# For Anthropic streaming\nnpm install @anthropic-ai/sdk\n\n# For Google streaming\nnpm install @google/genai\n\n# For Groq streaming (OpenAI-compatible)\nnpm install groq-sdk\n\n# For Cerebras streaming (OpenAI-compatible)\nnpm install openai\n\n# For Cohere streaming\nnpm install cohere-ai\n\n# For Mistral streaming\nnpm install @mistralai/mistralai\n\n# For AWS Bedrock streaming (Claude models)\nnpm install @aws-sdk/client-bedrock-runtime\n\n# For Deepseek streaming (OpenAI-compatible)\nnpm install openai\n```\n\n## Setting Up\n\nCreate a `.env` file in your project:\n\n```env\nGRAPHLIT_ORGANIZATION_ID=your_org_id\nGRAPHLIT_ENVIRONMENT_ID=your_env_id\nGRAPHLIT_JWT_SECRET=your_secret\n\n# Optional: For streaming with specific providers\nOPENAI_API_KEY=your_key\nANTHROPIC_API_KEY=your_key\nGOOGLE_API_KEY=your_key\n\n# Additional streaming providers\nGROQ_API_KEY=your_key          # For Groq models (Llama, Mixtral)\nCEREBRAS_API_KEY=your_key      # For Cerebras models\nCOHERE_API_KEY=your_key        # For Cohere Command models\nMISTRAL_API_KEY=your_key       # For Mistral models\nDEEPSEEK_API_KEY=your_key      # For Deepseek models\n\n# For AWS Bedrock streaming (requires AWS credentials)\nAWS_REGION=us-east-2\nAWS_ACCESS_KEY_ID=your_key\nAWS_SECRET_ACCESS_KEY=your_secret\n```\n\n## Migrating from @google/generative-ai\n\n### ⚠️ Breaking Change in v1.3.0\n\nThe deprecated `@google/generative-ai` SDK has been replaced with the new `@google/genai` SDK. This change provides better thinking/reasoning support and improved streaming reliability.\n\n### Migration Steps\n\n1. **Update your dependencies:**\n\n```bash\n# Remove old SDK\nnpm uninstall @google/generative-ai\n\n# Install new SDK\nnpm install @google/genai\n```\n\n2. 
**Update your client initialization:**\n\n```typescript\n// Old (deprecated)\nimport { GoogleGenerativeAI } from \"@google/generative-ai\";\nconst googleClient = new GoogleGenerativeAI(apiKey);\nclient.setGoogleClient(googleClient);\n\n// New\nimport { GoogleGenAI } from \"@google/genai\";\nconst googleClient = new GoogleGenAI({ apiKey });\nclient.setGoogleClient(googleClient);\n```\n\n3. **No other code changes required!**\n   - The Graphlit SDK handles all the API differences internally\n   - Your existing specifications and conversations will continue to work\n   - Thinking/reasoning detection is now more reliable with proper `part.thought` support\n\n### Why the Migration?\n\n- The `@google/generative-ai` SDK is deprecated by Google\n- New SDK provides better support for Gemini 2.x features including thinking mode\n- Improved streaming performance and reliability\n- Proper detection of thought parts without markdown parsing hacks\n\n### Benefits\n\n- **Better Thinking Detection**: Properly detects and separates thinking content using Google's official API\n- **Improved Performance**: More efficient streaming with the new SDK architecture\n- **Future-Proof**: Ensures compatibility with upcoming Gemini models and features\n- **Cleaner API**: Simplified configuration and better TypeScript types\n\n## Reasoning Support (New!) 🧠\n\nThe SDK can detect and expose AI reasoning processes, showing you how models \"think\" through problems. This feature works with models that support reasoning output.\n\n### Quick Example\n\n```typescript\nawait client.streamAgent(\n  \"What's 15% of 240? 
Think step by step.\",\n  (event) =\u003e {\n    if (event.type === \"reasoning_update\") {\n      console.log(\"🤔 Model thinking:\", event.content);\n    } else if (event.type === \"message_update\") {\n      console.log(\"💬 Answer:\", event.message.message);\n    }\n  },\n  undefined,\n  { id: specificationId },\n);\n```\n\n### Supported Models\n\n| Provider        | Models                       | Format         | Example Output                             |\n| --------------- | ---------------------------- | -------------- | ------------------------------------------ |\n| **AWS Bedrock** | Nova Premier                 | `thinking_tag` | `\u003cthinking\u003eLet me calculate...\u003c/thinking\u003e` |\n| **Deepseek**    | Chat, Reasoner               | `markdown`     | `**Step 1:** First, I need to...`          |\n| **Anthropic**   | Claude (with special access) | `thinking_tag` | Internal thinking blocks                   |\n\n### Using Reasoning Detection\n\n```typescript\n// Create a specification with a reasoning-capable model\nconst spec = await client.createSpecification({\n  name: \"Reasoning Assistant\",\n  serviceType: Types.ModelServiceTypes.Bedrock,\n  bedrock: {\n    model: Types.BedrockModels.NovaPremier,\n    temperature: 0.7,\n  },\n});\n\n// Track reasoning steps\nconst reasoningSteps: string[] = [];\n\nawait client.streamAgent(\n  \"Analyze the pros and cons of remote work. 
Think carefully.\",\n  (event) =\u003e {\n    switch (event.type) {\n      case \"reasoning_update\":\n        // Capture model's thinking process\n        reasoningSteps.push(event.content);\n        console.log(`🧠 Thinking (${event.format}):`, event.content);\n\n        if (event.isComplete) {\n          console.log(\"✅ Reasoning complete!\");\n        }\n        break;\n\n      case \"message_update\":\n        // The actual answer (reasoning removed)\n        console.log(\"Answer:\", event.message.message);\n        break;\n    }\n  },\n  undefined,\n  { id: spec.createSpecification!.id },\n);\n```\n\n### Key Features\n\n- **Automatic Detection**: Reasoning content is automatically detected and separated\n- **Format Preservation**: Maintains original formatting (markdown, tags, etc.)\n- **Real-time Streaming**: Reasoning streams as it's generated\n- **Clean Separation**: Final answers don't include thinking content\n\n## Stream Cancellation (New!) 🛑\n\nCancel long-running AI generations instantly using the standard Web API `AbortController`.\n\n### Quick Example\n\n```typescript\nconst controller = new AbortController();\n\n// Add a stop button\ndocument.getElementById(\"stop\").onclick = () =\u003e controller.abort();\n\ntry {\n  await client.streamAgent(\n    \"Write a 10,000 word essay about quantum computing...\",\n    (event) =\u003e {\n      if (event.type === \"message_update\") {\n        console.log(event.message.message);\n      }\n    },\n    undefined,\n    { id: specificationId },\n    undefined, // tools\n    undefined, // toolHandlers\n    { abortSignal: controller.signal }, // Pass the signal\n  );\n} catch (error) {\n  if (controller.signal.aborted) {\n    console.log(\"✋ Generation stopped by user\");\n  }\n}\n```\n\n### Advanced Cancellation\n\n```typescript\n// Cancel after timeout\nconst controller = new AbortController();\nsetTimeout(() =\u003e controller.abort(), 30000); // 30 second timeout\n\n// Cancel multiple streams at once\nconst 
controller = new AbortController();\n\nconst streams = [\n  client.streamAgent(\"Query 1\", handler1, undefined, spec1, null, null, {\n    abortSignal: controller.signal,\n  }),\n  client.streamAgent(\"Query 2\", handler2, undefined, spec2, null, null, {\n    abortSignal: controller.signal,\n  }),\n  client.streamAgent(\"Query 3\", handler3, undefined, spec3, null, null, {\n    abortSignal: controller.signal,\n  }),\n];\n\n// Cancel all streams\ncontroller.abort();\nawait Promise.allSettled(streams);\n```\n\n### Features\n\n- **Instant Response**: Cancellation happens immediately\n- **Provider Support**: Works with all streaming providers\n- **Tool Interruption**: Stops tool execution between rounds\n- **Clean Cleanup**: Resources are properly released\n\n## Network Resilience\n\nThe SDK includes automatic retry logic for network errors and transient failures:\n\n### Default Retry Configuration\n\nBy default, the client will automatically retry on these status codes:\n\n- `429` - Too Many Requests\n- `500` - Internal Server Error\n- `502` - Bad Gateway\n- `503` - Service Unavailable\n- `504` - Gateway Timeout\n\n```typescript\nconst client = new Graphlit(); // Uses default retry configuration\n```\n\n### Custom Retry Configuration\n\nConfigure retry behavior to match your needs:\n\n```typescript\nconst client = new Graphlit({\n  organizationId: \"your_org_id\",\n  environmentId: \"your_env_id\",\n  jwtSecret: \"your_secret\",\n  retryConfig: {\n    maxAttempts: 10, // Maximum retry attempts (default: 5)\n    initialDelay: 500, // Initial delay in ms (default: 300)\n    maxDelay: 60000, // Maximum delay in ms (default: 30000)\n    jitter: true, // Add randomness to delays (default: true)\n    retryableStatusCodes: [429, 500, 502, 503, 504], // Custom status codes\n    onRetry: (attempt, error, operation) =\u003e {\n      console.log(`Retry attempt ${attempt} for ${operation.operationName}`);\n      console.log(`Error: ${error.message}`);\n    },\n  
},\n});\n```\n\n### Update Retry Configuration at Runtime\n\nChange retry behavior on the fly:\n\n```typescript\n// Start with default configuration\nconst client = new Graphlit();\n\n// Later, update for more aggressive retries\nclient.setRetryConfig({\n  maxAttempts: 20,\n  initialDelay: 100,\n  retryableStatusCodes: [429, 500, 502, 503, 504, 521, 522, 524],\n});\n```\n\n### Disable Retries\n\nFor testing or specific scenarios:\n\n```typescript\nconst client = new Graphlit({\n  organizationId: \"your_org_id\",\n  environmentId: \"your_env_id\",\n  jwtSecret: \"your_secret\",\n  retryConfig: {\n    maxAttempts: 1, // No retries\n  },\n});\n```\n\n## Streaming Provider Support\n\nThe Graphlit SDK supports real-time streaming responses from 9 different LLM providers. Each provider requires its specific SDK and API key:\n\n### Supported Providers\n\n| Provider        | Models                                        | SDK Required                      | API Key             |\n| --------------- | --------------------------------------------- | --------------------------------- | ------------------- |\n| **OpenAI**      | GPT-4, GPT-4o, GPT-4.1, O1, O3, O4            | `openai`                          | `OPENAI_API_KEY`    |\n| **Anthropic**   | Claude 3, Claude 3.5, Claude 3.7, Claude 4    | `@anthropic-ai/sdk`               | `ANTHROPIC_API_KEY` |\n| **Google**      | Gemini 1.5, Gemini 2.0, Gemini 2.5            | `@google/genai`                   | `GOOGLE_API_KEY`    |\n| **Groq**        | Llama 4, Llama 3.3, Mixtral, Deepseek R1      | `groq-sdk`                        | `GROQ_API_KEY`      |\n| **Cerebras**    | Llama 3.3, Llama 3.1                          | `openai`                          | `CEREBRAS_API_KEY`  |\n| **Cohere**      | Command R+, Command R, Command R7B, Command A | `cohere-ai`                       | `COHERE_API_KEY`    |\n| **Mistral**     | Mistral Large, Medium, Small, Nemo, Pixtral   | `@mistralai/mistralai`            | `MISTRAL_API_KEY`   
|\n| **AWS Bedrock** | Nova Premier/Pro, Claude 3.7, Llama 4         | `@aws-sdk/client-bedrock-runtime` | AWS credentials     |\n| **Deepseek**    | Deepseek Chat, Deepseek Reasoner              | `openai`                          | `DEEPSEEK_API_KEY`  |\n\n### Setting Up Streaming\n\nEach provider requires both the SDK installation and proper client setup:\n\n```typescript\nimport { Graphlit } from \"graphlit-client\";\n\nconst client = new Graphlit();\n\n// Example: Set up multiple streaming providers\nif (process.env.OPENAI_API_KEY) {\n  const { OpenAI } = await import(\"openai\");\n  client.setOpenAIClient(new OpenAI());\n}\n\nif (process.env.COHERE_API_KEY) {\n  const { CohereClientV2 } = await import(\"cohere-ai\");\n  client.setCohereClient(\n    new CohereClientV2({ token: process.env.COHERE_API_KEY }),\n  );\n}\n\nif (process.env.GROQ_API_KEY) {\n  const { Groq } = await import(\"groq-sdk\");\n  client.setGroqClient(new Groq({ apiKey: process.env.GROQ_API_KEY }));\n}\n\n// Then create specifications for any provider\nconst spec = await client.createSpecification({\n  name: \"Multi-Provider Assistant\",\n  type: Types.SpecificationTypes.Completion,\n  serviceType: Types.ModelServiceTypes.Cohere, // or any supported provider\n  cohere: {\n    model: Types.CohereModels.CommandRPlus,\n    temperature: 0.7,\n  },\n});\n```\n\n### Provider-Specific Notes\n\n- **OpenAI-Compatible**: Groq, Cerebras, and Deepseek use OpenAI-compatible APIs\n- **AWS Bedrock**: Requires AWS credentials and uses the Converse API for streaming\n- **Cohere**: Supports both chat and tool calling with Command models\n- **Google**: Includes advanced multimodal capabilities with Gemini models\n- **Mistral**: Supports both text and vision models (Pixtral)\n\n## Basic Examples\n\n### 1. 
Chat with AI\n\nSimple conversation with streaming responses:\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\nconst client = new Graphlit(); // Uses env vars: GRAPHLIT_ORGANIZATION_ID, GRAPHLIT_ENVIRONMENT_ID, GRAPHLIT_JWT_SECRET\n\n// Create a specification for the AI model\nconst spec = await client.createSpecification({\n  name: \"Assistant\",\n  type: Types.SpecificationTypes.Completion,\n  serviceType: Types.ModelServiceTypes.OpenAi,\n  openAI: {\n    model: Types.OpenAiModels.Gpt4O_128K,\n    temperature: 0.7,\n  },\n});\n\n// Chat with streaming\nawait client.streamAgent(\n  \"What can you help me with?\",\n  (event) =\u003e {\n    if (event.type === \"message_update\") {\n      // Print the AI's response as it streams\n      process.stdout.write(event.message.message);\n    }\n  },\n  undefined, // conversationId\n  { id: spec.createSpecification.id }, // specification\n);\n```\n\n### 2. Ingest and Query Documents\n\nUpload a PDF and ask questions about it:\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\nconst client = new Graphlit(); // Uses env vars: GRAPHLIT_ORGANIZATION_ID, GRAPHLIT_ENVIRONMENT_ID, GRAPHLIT_JWT_SECRET\n\n// Create a specification\nconst spec = await client.createSpecification({\n  name: \"Document Q\u0026A\",\n  type: Types.SpecificationTypes.Completion,\n  serviceType: Types.ModelServiceTypes.OpenAi,\n  openAI: {\n    model: Types.OpenAiModels.Gpt4O_128K,\n  },\n});\n\n// Upload a PDF synchronously to ensure it's ready\nconst content = await client.ingestUri(\n  \"https://arxiv.org/pdf/1706.03762.pdf\", // Attention Is All You Need paper\n  \"AI Research Paper\", // name\n  undefined, // id\n  true, // isSynchronous - waits for processing\n);\n\nconsole.log(`✅ Uploaded: ${content.ingestUri.id}`);\n\n// Wait a moment for content to be fully indexed\nawait new Promise((resolve) =\u003e setTimeout(resolve, 5000));\n\n// Create a conversation that filters to this specific content\nconst 
conversation = await client.createConversation({\n  filter: { contents: [{ id: content.ingestUri.id }] },\n});\n\n// Ask questions about the PDF\nawait client.streamAgent(\n  \"What are the key innovations in this paper?\",\n  (event) =\u003e {\n    if (event.type === \"message_update\") {\n      console.log(event.message.message);\n    }\n  },\n  conversation.createConversation.id, // conversationId with content filter\n  { id: spec.createSpecification.id }, // specification\n);\n```\n\n### 3. Web Scraping\n\nExtract content from websites:\n\n```typescript\n// Scrape a website (waits for processing to complete)\nconst webpage = await client.ingestUri(\n  \"https://en.wikipedia.org/wiki/Artificial_intelligence\", // uri\n  \"AI Wikipedia Page\", // name\n  undefined, // id\n  true, // isSynchronous\n);\n\n// Wait for content to be indexed\nawait new Promise((resolve) =\u003e setTimeout(resolve, 5000));\n\n// Create a conversation filtered to this content\nconst conversation = await client.createConversation({\n  filter: { contents: [{ id: webpage.ingestUri.id }] },\n});\n\n// Ask about the specific content\nconst response = await client.promptAgent(\n  \"Summarize the key points about AI from this Wikipedia page\",\n  conversation.createConversation.id, // conversationId with filter\n  { id: spec.createSpecification.id }, // specification (create one as shown above)\n);\n\nconsole.log(response.message);\n```\n\n### 4. 
Multiple Provider Streaming\n\nCompare responses from different LLM providers:\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\nconst client = new Graphlit();\n\n// Set up multiple providers\nif (process.env.OPENAI_API_KEY) {\n  const { OpenAI } = await import(\"openai\");\n  client.setOpenAIClient(new OpenAI());\n}\n\nif (process.env.COHERE_API_KEY) {\n  const { CohereClientV2 } = await import(\"cohere-ai\");\n  client.setCohereClient(\n    new CohereClientV2({ token: process.env.COHERE_API_KEY }),\n  );\n}\n\nif (process.env.GROQ_API_KEY) {\n  const { Groq } = await import(\"groq-sdk\");\n  client.setGroqClient(new Groq({ apiKey: process.env.GROQ_API_KEY }));\n}\n\n// Create specifications for different providers\nconst providers = [\n  {\n    name: \"OpenAI GPT-4o\",\n    serviceType: Types.ModelServiceTypes.OpenAi,\n    openAI: { model: Types.OpenAiModels.Gpt4O_128K },\n  },\n  {\n    name: \"Cohere Command R+\",\n    serviceType: Types.ModelServiceTypes.Cohere,\n    cohere: { model: Types.CohereModels.CommandRPlus },\n  },\n  {\n    name: \"Groq Llama\",\n    serviceType: Types.ModelServiceTypes.Groq,\n    groq: { model: Types.GroqModels.Llama_3_3_70B },\n  },\n];\n\n// Compare responses\nfor (const provider of providers) {\n  console.log(`\\n🤖 ${provider.name}:`);\n\n  const spec = await client.createSpecification({\n    ...provider,\n    type: Types.SpecificationTypes.Completion,\n  });\n\n  await client.streamAgent(\n    \"Explain quantum computing in simple terms\",\n    (event) =\u003e {\n      if (event.type === \"message_update\") {\n        process.stdout.write(event.message.message);\n      }\n    },\n    undefined,\n    { id: spec.createSpecification.id },\n  );\n}\n```\n\n### 5. 
Reasoning + Cancellation Example\n\nCombine reasoning detection with cancellable streams:\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\nconst client = new Graphlit();\nconst controller = new AbortController();\n\n// Create spec for reasoning model\nconst spec = await client.createSpecification({\n  name: \"Reasoning Demo\",\n  serviceType: Types.ModelServiceTypes.Bedrock,\n  bedrock: {\n    model: Types.BedrockModels.NovaPremier,\n  },\n});\n\n// UI elements\nconst stopButton = document.getElementById(\"stop-reasoning\");\nconst reasoningDiv = document.getElementById(\"reasoning\");\nconst answerDiv = document.getElementById(\"answer\");\n\nstopButton.onclick = () =\u003e {\n  controller.abort();\n  console.log(\"🛑 Cancelled!\");\n};\n\ntry {\n  await client.streamAgent(\n    \"Solve this puzzle: If it takes 5 machines 5 minutes to make 5 widgets, how long does it take 100 machines to make 100 widgets? Think through this step-by-step.\",\n    (event) =\u003e {\n      switch (event.type) {\n        case \"reasoning_update\":\n          // Show the AI's thought process\n          reasoningDiv.textContent = event.content;\n          if (event.isComplete) {\n            reasoningDiv.classList.add(\"complete\");\n          }\n          break;\n\n        case \"message_update\":\n          // Show the final answer\n          answerDiv.textContent = event.message.message;\n          break;\n\n        case \"conversation_completed\":\n          stopButton.disabled = true;\n          console.log(\"✅ Complete!\");\n          break;\n      }\n    },\n    undefined,\n    { id: spec.createSpecification!.id },\n    undefined,\n    undefined,\n    { abortSignal: controller.signal },\n  );\n} catch (error) {\n  if (controller.signal.aborted) {\n    console.log(\"Reasoning cancelled by user\");\n  }\n}\n```\n\n### 6. 
Tool Calling\n\nLet AI call functions to get real-time data:\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\nconst client = new Graphlit(); // Uses env vars: GRAPHLIT_ORGANIZATION_ID, GRAPHLIT_ENVIRONMENT_ID, GRAPHLIT_JWT_SECRET\n\n// Define a weather tool\nconst weatherTool: Types.ToolDefinitionInput = {\n  name: \"get_weather\",\n  description: \"Get current weather for a city\",\n  schema: JSON.stringify({\n    type: \"object\",\n    properties: {\n      city: { type: \"string\", description: \"City name\" },\n    },\n    required: [\"city\"],\n  }),\n};\n\n// Tool implementation\nconst toolHandlers = {\n  get_weather: async (args: { city: string }) =\u003e {\n    // Call your weather API here\n    return {\n      city: args.city,\n      temperature: 72,\n      condition: \"sunny\",\n    };\n  },\n};\n\n// Create a specification for tool calling\nconst spec = await client.createSpecification({\n  name: \"Weather Assistant\",\n  type: Types.SpecificationTypes.Completion,\n  serviceType: Types.ModelServiceTypes.OpenAi,\n  openAI: {\n    model: Types.OpenAiModels.Gpt4O_128K,\n  },\n});\n\n// Chat with tools\nawait client.streamAgent(\n  \"What's the weather in San Francisco?\",\n  (event) =\u003e {\n    if (event.type === \"tool_update\" \u0026\u0026 event.status === \"completed\") {\n      console.log(`🔧 Called ${event.toolCall.name}`);\n    } else if (event.type === \"message_update\") {\n      console.log(event.message.message);\n    }\n  },\n  undefined, // conversationId\n  { id: spec.createSpecification.id }, // specification\n  [weatherTool], // tools\n  toolHandlers, // handlers\n);\n```\n\n## Common Use Cases\n\n### Build a Knowledge Base Assistant\n\nCreate an AI that answers questions from your documents:\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\nclass KnowledgeAssistant {\n  private client: Graphlit;\n  private conversationId?: string;\n  private specificationId?: string;\n  private contentIds: 
string[] = [];\n\n  constructor() {\n    this.client = new Graphlit(); // Uses env vars: GRAPHLIT_ORGANIZATION_ID, GRAPHLIT_ENVIRONMENT_ID, GRAPHLIT_JWT_SECRET\n  }\n\n  async initialize() {\n    // Create a specification for the assistant\n    const spec = await this.client.createSpecification({\n      name: \"Knowledge Assistant\",\n      type: Types.SpecificationTypes.Completion,\n      serviceType: Types.ModelServiceTypes.OpenAi,\n      openAI: {\n        model: Types.OpenAiModels.Gpt4O_128K,\n        temperature: 0.7,\n      },\n    });\n    this.specificationId = spec.createSpecification?.id;\n  }\n\n  async uploadDocuments(urls: string[]) {\n    console.log(\"📚 Uploading documents...\");\n\n    for (const url of urls) {\n      const content = await this.client.ingestUri(\n        url, // uri\n        url.split(\"/\").pop() || \"Document\", // name\n        undefined, // id\n        true, // isSynchronous - wait for processing\n      );\n      this.contentIds.push(content.ingestUri.id);\n    }\n\n    console.log(\"✅ Documents uploaded!\");\n\n    // Wait for content to be indexed\n    await new Promise((resolve) =\u003e setTimeout(resolve, 5000));\n  }\n\n  async ask(question: string) {\n    // Create conversation with content filter if not exists\n    if (!this.conversationId \u0026\u0026 this.contentIds.length \u003e 0) {\n      const conversation = await this.client.createConversation({\n        filter: { contents: this.contentIds.map((id) =\u003e ({ id })) },\n      });\n      this.conversationId = conversation.createConversation?.id;\n    }\n\n    await this.client.streamAgent(\n      question,\n      (event) =\u003e {\n        if (event.type === \"conversation_started\" \u0026\u0026 !this.conversationId) {\n          this.conversationId = event.conversationId;\n        } else if (event.type === \"message_update\") {\n          process.stdout.write(event.message.message);\n        }\n      },\n      this.conversationId, // Maintains conversation 
context\n      { id: this.specificationId! }, // specification\n    );\n  }\n}\n\n// Usage\nconst assistant = new KnowledgeAssistant();\nawait assistant.initialize();\n\n// Upload your documents\nawait assistant.uploadDocuments([\n  \"https://arxiv.org/pdf/2103.15348.pdf\",\n  \"https://arxiv.org/pdf/1706.03762.pdf\",\n]);\n\n// Ask questions\nawait assistant.ask(\"What are these papers about?\");\nawait assistant.ask(\"How do they relate to each other?\");\n```\n\n### Extract Data from Documents\n\nExtract specific information from uploaded content:\n\n```typescript\n// Upload a document synchronously\nconst document = await client.ingestUri(\n  \"https://example.com/document.pdf\", // uri\n  \"Document #12345\", // name\n  undefined, // id\n  true, // isSynchronous\n);\n\n// Wait for content to be indexed\nawait new Promise((resolve) =\u003e setTimeout(resolve, 5000));\n\n// Extract specific data\nconst extraction = await client.extractContents(\n  \"Extract the key information from this document\",\n  undefined, // tools\n  undefined, // specification\n  { contents: [{ id: document.ingestUri.id }] }, // filter\n);\n\nconsole.log(\"Extracted data:\", extraction.extractContents);\n```\n\n### Summarize Multiple Documents\n\nCreate summaries across multiple files:\n\n```typescript\n// Upload multiple documents synchronously\nconst ids: string[] = [];\n\nfor (const url of documentUrls) {\n  const content = await client.ingestUri(\n    url, // uri\n    url.split(\"/\").pop() || \"Document\", // name\n    undefined, // id\n    true, // isSynchronous\n  );\n  ids.push(content.ingestUri.id);\n}\n\n// Generate a summary across all documents\nconst summary = await client.summarizeContents(\n  [\n    {\n      type: Types.SummarizationTypes.Custom,\n      prompt: \"Create an executive summary of these documents\",\n    },\n  ], // summarizations\n  { contents: ids.map((id) =\u003e ({ id })) }, // filter\n);\n\nconsole.log(\"Summary:\", summary.summarizeContents);\n```\n\n### 
Processing Options\n\n```typescript\n// Option 1: Synchronous processing (simpler)\nconst content = await client.ingestUri(\n  \"https://example.com/large-document.pdf\", // uri\n  undefined, // name\n  undefined, // id\n  true, // isSynchronous\n);\nconsole.log(\"✅ Content ready!\");\n\n// Option 2: Asynchronous processing (for large files)\nconst asyncContent = await client.ingestUri(\n  \"https://example.com/very-large-video.mp4\", // uri\n  // isSynchronous defaults to false\n);\n\n// Check status later\nlet isReady = false;\nwhile (!isReady) {\n  const status = await client.isContentDone(asyncContent.ingestUri.id);\n  isReady = status.isContentDone?.result || false;\n\n  if (!isReady) {\n    console.log(\"⏳ Still processing...\");\n    await new Promise((resolve) =\u003e setTimeout(resolve, 2000));\n  }\n}\nconsole.log(\"✅ Content ready!\");\n```\n\n## Advanced Agent Features\n\n### Using Content Filters\n\nControl what content the agent can access during conversations:\n\n```typescript\n// Example 1: Chat with specific documents only\nconst result = await client.promptAgent(\n  \"What are the main points in these documents?\",\n  undefined, // conversationId - will create new\n  { id: specificationId },\n  undefined, // tools\n  undefined, // toolHandlers\n  undefined, // options\n  undefined, // mimeType\n  undefined, // data\n  {\n    // Only allow retrieval from specific content\n    contents: [{ id: \"content-id-1\" }, { id: \"content-id-2\" }],\n  },\n);\n\n// Example 2: Streaming with content filter\nawait client.streamAgent(\n  \"Explain the technical details\",\n  (event) =\u003e {\n    if (event.type === \"message_update\") {\n      process.stdout.write(event.message.message);\n    }\n  },\n  undefined, // conversationId\n  { id: specificationId },\n  undefined, // tools\n  undefined, // toolHandlers\n  undefined, // options\n  undefined, // mimeType\n  undefined, // data\n  {\n    // Filter by collection\n    collections: [{ id: \"technical-docs-collection\" 
}],\n  },\n);\n```\n\n### Using Augmented Filters\n\nForce specific content into the LLM context without retrieval:\n\n```typescript\n// Example: Chat with a specific file always in context\nconst fileContent = await client.getContent(\"file-content-id\");\n\nawait client.streamAgent(\n  \"What patterns do you see in this code?\",\n  (event) =\u003e {\n    if (event.type === \"message_update\") {\n      process.stdout.write(event.message.message);\n    }\n  },\n  undefined, // conversationId\n  { id: specificationId },\n  undefined, // tools\n  undefined, // toolHandlers\n  undefined, // options\n  undefined, // mimeType\n  undefined, // data\n  undefined, // contentFilter\n  {\n    // Force this content into context\n    contents: [{ id: fileContent.content.id }],\n  },\n);\n```\n\n### Combining Filters\n\nUse both filters for precise control:\n\n```typescript\n// Chat about specific code with documentation available\nawait client.promptAgent(\n  \"How does this code implement the algorithm described in the docs?\",\n  undefined,\n  { id: specificationId },\n  undefined,\n  undefined,\n  undefined,\n  undefined,\n  undefined,\n  {\n    // Can retrieve from documentation\n    collections: [{ id: \"algorithm-docs\" }],\n  },\n  {\n    // Always include the specific code file\n    contents: [{ id: \"implementation-file-id\" }],\n  },\n);\n```\n\n## Advanced Workflows\n\n### Creating Workflows for Content Processing\n\nWorkflows automatically process content when ingested:\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\nconst client = new Graphlit(); // Uses env vars: GRAPHLIT_ORGANIZATION_ID, GRAPHLIT_ENVIRONMENT_ID, GRAPHLIT_JWT_SECRET\n\n// Create specifications for AI models\nconst summarizationSpec = await client.createSpecification({\n  name: \"Summarizer\",\n  type: Types.SpecificationTypes.Summarization,\n  serviceType: Types.ModelServiceTypes.OpenAi,\n  openAI: {\n    model: Types.OpenAiModels.Gpt4O_128K,\n  },\n});\n\n// Create a 
workflow that summarizes all content\nconst workflow = await client.createWorkflow({\n  name: \"Document Intelligence\",\n  preparation: {\n    summarizations: [\n      {\n        type: Types.SummarizationTypes.Summary,\n        specification: { id: summarizationSpec.createSpecification.id },\n      },\n    ],\n  },\n});\n\n// Set workflow as default for project\nawait client.updateProject({\n  workflow: { id: workflow.createWorkflow.id },\n});\n\n// Now all content will be automatically summarized\nconst content = await client.ingestUri(\n  \"https://example.com/report.pdf\", // uri\n);\n```\n\n### Creating Specifications\n\nSpecifications configure how AI models behave:\n\n```typescript\nimport { Graphlit, Types } from \"graphlit-client\";\n\n// Create a conversational AI specification\nconst conversationSpec = await client.createSpecification({\n  name: \"Customer Support AI\",\n  type: Types.SpecificationTypes.Completion,\n  serviceType: Types.ModelServiceTypes.OpenAi,\n  systemPrompt: \"You are a helpful customer support assistant.\",\n  openAI: {\n    model: Types.OpenAiModels.Gpt4O_128K,\n    temperature: 0.7,\n    completionTokenLimit: 2000,\n  },\n});\n\n// Use the specification in conversations\nawait client.streamAgent(\n  \"How do I reset my password?\",\n  (event) =\u003e {\n    if (event.type === \"message_update\") {\n      console.log(event.message.message);\n    }\n  },\n  undefined,\n  { id: conversationSpec.createSpecification.id },\n);\n```\n\n## API Reference\n\n### Client Methods\n\n```typescript\nconst client = new Graphlit(organizationId?, environmentId?, jwtSecret?);\n```\n\n#### Content Operations\n\n- `ingestUri(uri, name?, id?, isSynchronous?, ...)` - Ingest content from URL\n- `ingestText(text, name?, textType?, ...)` - Ingest text content directly\n- `queryContents(filter?)` - Search and query content\n- `getContent(id)` - Get content by ID\n- `deleteContent(id)` - Delete content\n- `extractContents(prompt, tools, specification?, 
filter?)` - Extract data from content\n- `summarizeContents(summarizations, filter?)` - Summarize content\n- `isContentDone(id)` - Check if content processing is complete\n\n#### Conversation Operations\n\n- `createConversation(input?)` - Create a new conversation\n- `streamAgent(prompt, handler, ...)` - Stream AI responses\n- `promptAgent(prompt, ...)` - Get AI response without streaming\n- `deleteConversation(id)` - Delete conversation\n\n#### Specification Operations\n\n- `createSpecification(input)` - Create AI model configuration\n- `querySpecifications(filter?)` - List specifications\n- `deleteSpecification(id)` - Delete specification\n\n#### Workflow Operations\n\n- `createWorkflow(input)` - Create content processing workflow\n- `queryWorkflows(filter?)` - List workflows\n- `updateProject(input)` - Update project settings\n\n### Event Types\n\n```typescript\ntype AgentStreamEvent =\n  | { type: \"conversation_started\"; conversationId: string }\n  | { type: \"message_update\"; message: { message: string } }\n  | { type: \"tool_update\"; toolCall: any; status: string }\n  | {\n      type: \"reasoning_update\";\n      content: string;\n      format: \"thinking_tag\" | \"markdown\" | \"custom\";\n      isComplete: boolean;\n    }\n  | {\n      type: \"context_window\";\n      usage: { usedTokens: number; maxTokens: number; percentage: number };\n    }\n  | { type: \"conversation_completed\"; message: { message: string } }\n  | { type: \"error\"; error: { message: string; recoverable: boolean } };\n```\n\n## Testing \u0026 Examples\n\nAll examples in this README are tested and verified. 
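
As a quick illustration of how to consume the `AgentStreamEvent` union shown in the API reference above, a handler can switch exhaustively on `event.type` and let the compiler flag any unhandled variant via a `never` check. This sketch is illustrative, not part of the SDK; the union type is reproduced inline so the snippet is self-contained, and `describeEvent` is a hypothetical helper name:

```typescript
// The AgentStreamEvent union from the API reference, reproduced so this
// snippet compiles on its own.
type AgentStreamEvent =
  | { type: "conversation_started"; conversationId: string }
  | { type: "message_update"; message: { message: string } }
  | { type: "tool_update"; toolCall: any; status: string }
  | {
      type: "reasoning_update";
      content: string;
      format: "thinking_tag" | "markdown" | "custom";
      isComplete: boolean;
    }
  | {
      type: "context_window";
      usage: { usedTokens: number; maxTokens: number; percentage: number };
    }
  | { type: "conversation_completed"; message: { message: string } }
  | { type: "error"; error: { message: string; recoverable: boolean } };

// Exhaustive handler: each case narrows `event` to one union member, and the
// `never` default makes the compiler error if a new event type is added later.
function describeEvent(event: AgentStreamEvent): string {
  switch (event.type) {
    case "conversation_started":
      return `started: ${event.conversationId}`;
    case "message_update":
    case "conversation_completed":
      return event.message.message;
    case "tool_update":
      return `tool ${event.status}`;
    case "reasoning_update":
      return event.isComplete ? "reasoning done" : "reasoning...";
    case "context_window":
      return `${event.usage.percentage}% of context used`;
    case "error":
      return event.error.recoverable ? "retrying" : event.error.message;
    default: {
      const _exhaustive: never = event;
      return _exhaustive;
    }
  }
}
```

A handler shaped like this can be passed to `streamAgent` in place of the inline `(event) => { ... }` callbacks used in the examples above.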
See [`test/readme-simple.test.ts`](test/readme-simple.test.ts) for runnable versions of these examples.\n\nTo run the examples yourself:\n\n```bash\n# Clone the repository\ngit clone https://github.com/graphlit/graphlit-client-typescript.git\ncd graphlit-client-typescript\n\n# Install dependencies\nnpm install\n\n# Set up your environment variables\ncp .env.example .env\n# Edit .env with your Graphlit credentials\n\n# Run the examples\nnpm test test/readme-simple.test.ts\n```\n\n## Support\n\n- 📖 **Documentation**: [https://docs.graphlit.dev/](https://docs.graphlit.dev/)\n- 💬 **Discord Community**: [Join our Discord](https://discord.gg/ygFmfjy3Qx)\n- 🐛 **Issues**: [GitHub Issues](https://github.com/graphlit/graphlit-client-typescript/issues)\n- 📧 **Email**: support@graphlit.com\n\n## License\n\nMIT License - see LICENSE file for details.\n