{"id":32565825,"url":"https://github.com/stackql/stackql-provider-anthropic","last_synced_at":"2025-10-29T04:53:40.859Z","repository":{"id":315402657,"uuid":"1059337825","full_name":"stackql/stackql-provider-anthropic","owner":"stackql","description":"generate stackql provider for Anthropic from openapi specs","archived":false,"fork":false,"pushed_at":"2025-09-18T10:28:10.000Z","size":298,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-09-18T12:39:36.762Z","etag":null,"topics":["anthropic","stackql","stackql-provider"],"latest_commit_sha":null,"homepage":"https://anthropic-provider.stackql.io/","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/stackql.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-09-18T10:04:36.000Z","updated_at":"2025-09-18T10:28:14.000Z","dependencies_parsed_at":"2025-09-18T12:39:43.055Z","dependency_job_id":"598ad4c3-af6b-4483-b305-490706912946","html_url":"https://github.com/stackql/stackql-provider-anthropic","commit_stats":null,"previous_names":["stackql/stackql-provider-anthropic"],"tags_count":null,"template":false,"template_full_name":"stackql/stackql-provider-TEMPLATE","purl":"pkg:github/stackql/stackql-provider-anthropic","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/stackql%2Fstackql-provider-anthropic","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposit
ories/stackql%2Fstackql-provider-anthropic/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/stackql%2Fstackql-provider-anthropic/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/stackql%2Fstackql-provider-anthropic/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/stackql","download_url":"https://codeload.github.com/stackql/stackql-provider-anthropic/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/stackql%2Fstackql-provider-anthropic/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":281563798,"owners_count":26522704,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-29T02:00:06.901Z","response_time":59,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["anthropic","stackql","stackql-provider"],"created_at":"2025-10-29T04:53:35.480Z","updated_at":"2025-10-29T04:53:40.850Z","avatar_url":"https://github.com/stackql.png","language":"JavaScript","readme":"# `anthropic` provider for [`stackql`](https://github.com/stackql/stackql)\r\n\r\nThis repository is used to generate and document the `anthropic` provider for StackQL, allowing you to query and interact with Anthropic's services using SQL-like syntax. 
The provider is built using the `@stackql/provider-utils` package, which provides tools for converting OpenAPI specifications into StackQL-compatible provider schemas.\r\n\r\n## Prerequisites\r\n\r\nTo use the Anthropic provider with StackQL, you'll need:\r\n\r\n1. An Anthropic account with appropriate API credentials\r\n2. An Anthropic API key with sufficient permissions for the resources you want to access\r\n3. StackQL CLI installed on your system (see [StackQL](https://github.com/stackql/stackql))\r\n\r\n## 1. Create the Open API Specification\r\n\r\nSince Anthropic doesn't currently provide an official OpenAPI specification, we need to create one based on their API documentation:\r\n\r\n```bash\r\nmkdir -p provider-dev/downloaded\r\n\r\n# Create a JSON OpenAPI spec based on Anthropic's API documentation\r\ncat \u003e provider-dev/downloaded/anthropic-openapi.json \u003c\u003c 'EOF'\r\n{\r\n  \"openapi\": \"3.0.0\",\r\n  \"info\": {\r\n    \"title\": \"Anthropic API\",\r\n    \"description\": \"Anthropic API for Claude and other AI models\",\r\n    \"version\": \"1.0.0\"\r\n  },\r\n  \"servers\": [\r\n    {\r\n      \"url\": \"https://api.anthropic.com\"\r\n    }\r\n  ],\r\n  \"components\": {\r\n    \"securitySchemes\": {\r\n      \"ApiKeyAuth\": {\r\n        \"type\": \"apiKey\",\r\n        \"in\": \"header\",\r\n        \"name\": \"x-api-key\"\r\n      },\r\n      \"AnthropicVersionHeader\": {\r\n        \"type\": \"apiKey\",\r\n        \"in\": \"header\",\r\n        \"name\": \"anthropic-version\"\r\n      }\r\n    }\r\n  },\r\n  \"security\": [\r\n    {\r\n      \"ApiKeyAuth\": [],\r\n      \"AnthropicVersionHeader\": []\r\n    }\r\n  ],\r\n  \"paths\": {\r\n    \"/v1/messages\": {\r\n      \"post\": {\r\n        \"operationId\": \"createMessage\",\r\n        \"summary\": \"Create a message\",\r\n        \"description\": \"Create a message and receive a response from Claude\",\r\n        \"tags\": [\"messages\"],\r\n        \"requestBody\": {\r\n          
\"required\": true,\r\n          \"content\": {\r\n            \"application/json\": {\r\n              \"schema\": {\r\n                \"type\": \"object\",\r\n                \"required\": [\"model\", \"messages\"],\r\n                \"properties\": {\r\n                  \"model\": {\r\n                    \"type\": \"string\",\r\n                    \"description\": \"The model that will complete your prompt\",\r\n                    \"example\": \"claude-3-opus-20240229\"\r\n                  },\r\n                  \"messages\": {\r\n                    \"type\": \"array\",\r\n                    \"description\": \"A list of messages comprising the conversation so far\",\r\n                    \"items\": {\r\n                      \"type\": \"object\",\r\n                      \"required\": [\"role\", \"content\"],\r\n                      \"properties\": {\r\n                        \"role\": {\r\n                          \"type\": \"string\",\r\n                          \"enum\": [\"user\", \"assistant\"],\r\n                          \"description\": \"The role of the message's author\"\r\n                        },\r\n                        \"content\": {\r\n                          \"type\": \"string\",\r\n                          \"description\": \"The content of the message\"\r\n                        }\r\n                      }\r\n                    }\r\n                  },\r\n                  \"max_tokens\": {\r\n                    \"type\": \"integer\",\r\n                    \"description\": \"The maximum number of tokens to generate\",\r\n                    \"default\": 1024\r\n                  },\r\n                  \"temperature\": {\r\n                    \"type\": \"number\",\r\n                    \"description\": \"Amount of randomness injected into the response\",\r\n                    \"default\": 1.0\r\n                  },\r\n                  \"system\": {\r\n                    \"type\": \"string\",\r\n                 
   \"description\": \"System prompt to guide Claude's behavior\"\r\n                  },\r\n                  \"metadata\": {\r\n                    \"type\": \"object\",\r\n                    \"description\": \"An object containing metadata about the request\"\r\n                  },\r\n                  \"stream\": {\r\n                    \"type\": \"boolean\",\r\n                    \"description\": \"Whether to stream the response\",\r\n                    \"default\": false\r\n                  }\r\n                }\r\n              }\r\n            }\r\n          }\r\n        },\r\n        \"responses\": {\r\n          \"200\": {\r\n            \"description\": \"Message created successfully\",\r\n            \"content\": {\r\n              \"application/json\": {\r\n                \"schema\": {\r\n                  \"type\": \"object\",\r\n                  \"properties\": {\r\n                    \"id\": {\r\n                      \"type\": \"string\",\r\n                      \"description\": \"The identifier for the message\"\r\n                    },\r\n                    \"type\": {\r\n                      \"type\": \"string\",\r\n                      \"description\": \"The type of object\"\r\n                    },\r\n                    \"role\": {\r\n                      \"type\": \"string\",\r\n                      \"description\": \"The role of the message author\"\r\n                    },\r\n                    \"content\": {\r\n                      \"type\": \"array\",\r\n                      \"description\": \"The content of the message\",\r\n                      \"items\": {\r\n                        \"type\": \"object\",\r\n                        \"properties\": {\r\n                          \"type\": {\r\n                            \"type\": \"string\",\r\n                            \"description\": \"The type of content\"\r\n                          },\r\n                          \"text\": {\r\n                            
\"type\": \"string\",\r\n                            \"description\": \"The text content\"\r\n                          }\r\n                        }\r\n                      }\r\n                    },\r\n                    \"model\": {\r\n                      \"type\": \"string\",\r\n                      \"description\": \"The model used\"\r\n                    },\r\n                    \"stop_reason\": {\r\n                      \"type\": \"string\",\r\n                      \"description\": \"The reason the model stopped generating\"\r\n                    },\r\n                    \"usage\": {\r\n                      \"type\": \"object\",\r\n                      \"description\": \"Usage statistics for the request\",\r\n                      \"properties\": {\r\n                        \"input_tokens\": {\r\n                          \"type\": \"integer\",\r\n                          \"description\": \"Number of tokens in the input\"\r\n                        },\r\n                        \"output_tokens\": {\r\n                          \"type\": \"integer\",\r\n                          \"description\": \"Number of tokens in the output\"\r\n                        }\r\n                      }\r\n                    }\r\n                  }\r\n                }\r\n              }\r\n            }\r\n          }\r\n        }\r\n      }\r\n    },\r\n    \"/v1/completions\": {\r\n      \"post\": {\r\n        \"operationId\": \"createCompletion\",\r\n        \"summary\": \"Create a completion\",\r\n        \"description\": \"Create a completion (legacy API)\",\r\n        \"tags\": [\"completions\"],\r\n        \"requestBody\": {\r\n          \"required\": true,\r\n          \"content\": {\r\n            \"application/json\": {\r\n              \"schema\": {\r\n                \"type\": \"object\",\r\n                \"required\": [\"model\", \"prompt\"],\r\n                \"properties\": {\r\n                  \"model\": {\r\n                    
\"type\": \"string\",\r\n                    \"description\": \"The model that will complete your prompt\"\r\n                  },\r\n                  \"prompt\": {\r\n                    \"type\": \"string\",\r\n                    \"description\": \"The prompt to complete\"\r\n                  },\r\n                  \"max_tokens_to_sample\": {\r\n                    \"type\": \"integer\",\r\n                    \"description\": \"The maximum number of tokens to generate\",\r\n                    \"default\": 1024\r\n                  },\r\n                  \"temperature\": {\r\n                    \"type\": \"number\",\r\n                    \"description\": \"Amount of randomness injected into the response\",\r\n                    \"default\": 1.0\r\n                  },\r\n                  \"stop_sequences\": {\r\n                    \"type\": \"array\",\r\n                    \"description\": \"Sequences that will cause the model to stop generating\",\r\n                    \"items\": {\r\n                      \"type\": \"string\"\r\n                    }\r\n                  },\r\n                  \"stream\": {\r\n                    \"type\": \"boolean\",\r\n                    \"description\": \"Whether to stream the response\",\r\n                    \"default\": false\r\n                  }\r\n                }\r\n              }\r\n            }\r\n          }\r\n        },\r\n        \"responses\": {\r\n          \"200\": {\r\n            \"description\": \"Completion created successfully\",\r\n            \"content\": {\r\n              \"application/json\": {\r\n                \"schema\": {\r\n                  \"type\": \"object\",\r\n                  \"properties\": {\r\n                    \"completion\": {\r\n                      \"type\": \"string\",\r\n                      \"description\": \"The completion text\"\r\n                    },\r\n                    \"model\": {\r\n                      \"type\": \"string\",\r\n           
           \"description\": \"The model used\"\r\n                    },\r\n                    \"stop_reason\": {\r\n                      \"type\": \"string\",\r\n                      \"description\": \"The reason the model stopped generating\"\r\n                    }\r\n                  }\r\n                }\r\n              }\r\n            }\r\n          }\r\n        }\r\n      }\r\n    },\r\n    \"/v1/models\": {\r\n      \"get\": {\r\n        \"operationId\": \"listModels\",\r\n        \"summary\": \"List models\",\r\n        \"description\": \"List available models\",\r\n        \"tags\": [\"models\"],\r\n        \"responses\": {\r\n          \"200\": {\r\n            \"description\": \"List of available models\",\r\n            \"content\": {\r\n              \"application/json\": {\r\n                \"schema\": {\r\n                  \"type\": \"object\",\r\n                  \"properties\": {\r\n                    \"models\": {\r\n                      \"type\": \"array\",\r\n                      \"description\": \"List of available models\",\r\n                      \"items\": {\r\n                        \"type\": \"object\",\r\n                        \"properties\": {\r\n                          \"name\": {\r\n                            \"type\": \"string\",\r\n                            \"description\": \"Model identifier\"\r\n                          },\r\n                          \"description\": {\r\n                            \"type\": \"string\",\r\n                            \"description\": \"Model description\"\r\n                          },\r\n                          \"context_window\": {\r\n                            \"type\": \"integer\",\r\n                            \"description\": \"Context window size in tokens\"\r\n                          },\r\n                          \"max_output_tokens\": {\r\n                            \"type\": \"integer\",\r\n                            \"description\": \"Maximum tokens 
in the output\"\r\n                          }\r\n                        }\r\n                      }\r\n                    }\r\n                  }\r\n                }\r\n              }\r\n            }\r\n          }\r\n        }\r\n      }\r\n    }\r\n  }\r\n}\r\nEOF\r\n```\r\n\r\n## 2. Split into Service Specs\r\n\r\nNext, split the OpenAPI specification into service-specific files:\r\n\r\n```bash\r\nrm -rf provider-dev/source/*\r\nnpm run split -- \\\r\n  --provider-name anthropic \\\r\n  --api-doc provider-dev/downloaded/anthropic-openapi.json \\\r\n  --svc-discriminator tag \\\r\n  --output-dir provider-dev/source \\\r\n  --overwrite \\\r\n  --svc-name-overrides \"$(cat \u003c\u003cEOF\r\n{\r\n  \"messages\": \"messages\",\r\n  \"completions\": \"completions\",\r\n  \"models\": \"models\"\r\n}\r\nEOF\r\n)\"\r\n```\r\n\r\n## 3. Generate Mappings\r\n\r\nGenerate the mapping configuration that connects OpenAPI operations to StackQL resources:\r\n\r\n```bash\r\nnpm run generate-mappings -- \\\r\n  --provider-name anthropic \\\r\n  --input-dir provider-dev/source \\\r\n  --output-dir provider-dev/config\r\n```\r\n\r\nUpdate the resultant `provider-dev/config/all_services.csv` to add the `stackql_resource_name`, `stackql_method_name`, `stackql_verb` values for each operation.\r\n\r\n## 4. 
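Because the split step discriminates services by each operation's `tags` value, every operation in the hand-written spec needs a tag and a unique `operationId` before splitting. A quick sanity check (an illustrative sketch, not part of the repo tooling; the `check_operations` helper and the inlined spec slice are for demonstration only) can catch omissions early:

```python
import json

# A minimal slice of the hand-written spec from step 1 (the full file lives
# at provider-dev/downloaded/anthropic-openapi.json).
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/v1/messages": {"post": {"operationId": "createMessage", "tags": ["messages"]}},
        "/v1/completions": {"post": {"operationId": "createCompletion", "tags": ["completions"]}},
        "/v1/models": {"get": {"operationId": "listModels", "tags": ["models"]}},
    },
}

def check_operations(spec):
    """Return (operationId, tag) pairs; raise if an operation is missing either."""
    seen, pairs = set(), []
    for path, methods in spec["paths"].items():
        for verb, op in methods.items():
            op_id, tags = op.get("operationId"), op.get("tags", [])
            if not op_id or not tags:
                raise ValueError(f"{verb.upper()} {path} is missing operationId or tags")
            if op_id in seen:
                raise ValueError(f"duplicate operationId: {op_id}")
            seen.add(op_id)
            pairs.append((op_id, tags[0]))
    return pairs

# Each distinct tag becomes a service file under provider-dev/source.
print(check_operations(spec))
```

With the spec above this prints one `(operationId, tag)` pair per operation, matching the three services named in the `--svc-name-overrides` map.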
Generate Provider\r\n\r\nThis step transforms the split OpenAPI service specs into a fully-functional StackQL provider by applying the resource and method mappings defined in your CSV file.\r\n\r\n```bash\r\nrm -rf provider-dev/openapi/*\r\nnpm run generate-provider -- \\\r\n  --provider-name anthropic \\\r\n  --input-dir provider-dev/source \\\r\n  --output-dir provider-dev/openapi/src/anthropic \\\r\n  --config-path provider-dev/config/all_services.csv \\\r\n  --servers '[{\"url\": \"https://api.anthropic.com\"}]' \\\r\n  --provider-config '{\"auth\": {\"type\": \"multi_header\", \"credentials\": {\"headers\": [{\"name\": \"x-api-key\", \"envVar\": \"ANTHROPIC_API_KEY\"}, {\"name\": \"anthropic-version\", \"value\": \"2023-06-01\"}]}}}' \\\r\n  --overwrite\r\n```\r\n\r\n## 5. Test Provider\r\n\r\n### Starting the StackQL Server\r\n\r\nBefore running tests, start a StackQL server with your provider:\r\n\r\n```bash\r\nPROVIDER_REGISTRY_ROOT_DIR=\"$(pwd)/provider-dev/openapi\"\r\nnpm run start-server -- --provider anthropic --registry $PROVIDER_REGISTRY_ROOT_DIR\r\n```\r\n\r\n### Test Meta Routes\r\n\r\nTest all metadata routes (services, resources, methods) in the provider:\r\n\r\n```bash\r\nnpm run test-meta-routes -- anthropic --verbose\r\n```\r\n\r\nWhen you're done testing, stop the StackQL server:\r\n\r\n```bash\r\nnpm run stop-server\r\n```\r\n\r\nUse this command to view the server status:\r\n\r\n```bash\r\nnpm run server-status\r\n```\r\n\r\n### Run test queries\r\n\r\nRun some test queries against the provider using the `stackql shell`:\r\n\r\n```bash\r\nPROVIDER_REGISTRY_ROOT_DIR=\"$(pwd)/provider-dev/openapi\"\r\nREG_STR='{\"url\": \"file://'${PROVIDER_REGISTRY_ROOT_DIR}'\", \"localDocRoot\": \"'${PROVIDER_REGISTRY_ROOT_DIR}'\", \"verifyConfig\": {\"nopVerify\": true}}'\r\n./stackql shell --registry=\"${REG_STR}\"\r\n```\r\n\r\nExample queries to try:\r\n\r\n```sql\r\n-- List available models\r\nSELECT \r\n*\r\nFROM anthropic.models.models;\r\n\r\n-- 
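Filling in `all_services.csv` by hand is mechanical, so it can help to see the shape of the rows. The sketch below is an assumption about the file's layout: the three `stackql_*` columns are the ones the steps above say must be populated, while the `operation_id` column name and the specific resource/method/verb values chosen here are illustrative, not prescribed by the tooling:

```python
import csv
import io

# Hypothetical mapping rows for provider-dev/config/all_services.csv.
# A sensible convention: POST operations that create objects map to the
# `insert` verb, GET list operations map to `select`.
rows = [
    {"operation_id": "createMessage",    "stackql_resource_name": "messages",
     "stackql_method_name": "create_message",    "stackql_verb": "insert"},
    {"operation_id": "createCompletion", "stackql_resource_name": "completions",
     "stackql_method_name": "create_completion", "stackql_verb": "insert"},
    {"operation_id": "listModels",       "stackql_resource_name": "models",
     "stackql_method_name": "list_models",       "stackql_verb": "select"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

These verb choices line up with the example queries later in this README: `SELECT` against `anthropic.models.models` and `INSERT` against the `messages` and `completions` resources.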
Create a message to get a response from Claude\r\nINSERT INTO anthropic.messages.messages (\r\n  json_data\r\n) VALUES (\r\n  '{\r\n    \"model\": \"claude-3-opus-20240229\",\r\n    \"messages\": [\r\n      {\r\n        \"role\": \"user\",\r\n        \"content\": \"What are the main differences between Claude 3 Opus and Claude 3 Sonnet?\"\r\n      }\r\n    ],\r\n    \"max_tokens\": 500,\r\n    \"temperature\": 0.7\r\n  }'\r\n);\r\n\r\n-- Create a completion (legacy API)\r\nINSERT INTO anthropic.completions.completions (\r\n  json_data\r\n) VALUES (\r\n  '{\r\n    \"model\": \"claude-2.1\",\r\n    \"prompt\": \"\\n\\nHuman: Explain quantum computing in simple terms\\n\\nAssistant: \",\r\n    \"max_tokens_to_sample\": 300,\r\n    \"temperature\": 0.7,\r\n    \"stop_sequences\": [\"\\n\\nHuman:\"]\r\n  }'\r\n);\r\n```\r\n\r\n## 6. Publish the provider\r\n\r\nTo publish the provider push the `anthropic` dir to `providers/src` in a feature branch of the [`stackql-provider-registry`](https://github.com/stackql/stackql-provider-registry). Follow the [registry release flow](https://github.com/stackql/stackql-provider-registry/blob/dev/docs/build-and-deployment.md).  \r\n\r\nLaunch the StackQL shell:\r\n\r\n```bash\r\nexport DEV_REG=\"{ \\\"url\\\": \\\"https://registry-dev.stackql.app/providers\\\" }\"\r\n./stackql --registry=\"${DEV_REG}\" shell\r\n```\r\n\r\nPull the latest dev `anthropic` provider:\r\n\r\n```sql\r\nregistry pull anthropic;\r\n```\r\n\r\nRun some test queries to verify the provider works as expected.\r\n\r\n## 7. Generate web docs\r\n\r\nProvider doc microsites are built using Docusaurus and published using GitHub Pages.  \r\n\r\na. Update `headerContent1.txt` and `headerContent2.txt` accordingly in `provider-dev/docgen/provider-data/`  \r\n\r\nb. 
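The `json_data` literal in the `INSERT` examples must be valid JSON and must include the fields the hand-written spec marks as required (`model` and `messages` for the messages endpoint). A small helper like the following (an illustrative sketch; `build_message_payload` is not part of the provider tooling) can construct and validate the payload before you paste it into the shell:

```python
import json

def build_message_payload(model, user_content, max_tokens=1024, temperature=1.0):
    """Build the json_data payload used by the messages INSERT example.

    The required fields ("model", "messages") and the defaults mirror the
    hand-written OpenAPI spec from step 1."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    for field in ("model", "messages"):
        if not payload[field]:
            raise ValueError(f"missing required field: {field}")
    return json.dumps(payload, indent=2)

print(build_message_payload(
    "claude-3-opus-20240229",
    "What are the main differences between Claude 3 Opus and Claude 3 Sonnet?",
    max_tokens=500,
    temperature=0.7,
))
```

The printed JSON can be dropped directly into the single-quoted `json_data` value of the `INSERT INTO anthropic.messages.messages` example.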
Update the following in `website/docusaurus.config.js`:  \r\n\r\n```js\r\n// Provider configuration - change these for different providers\r\nconst providerName = \"anthropic\";\r\nconst providerTitle = \"Anthropic Provider\";\r\n```\r\n\r\nc. Then generate docs using...\r\n\r\n```bash\r\nnpm run generate-docs -- \\\r\n  --provider-name anthropic \\\r\n  --provider-dir ./provider-dev/openapi/src/anthropic/v00.00.00000 \\\r\n  --output-dir ./website \\\r\n  --provider-data-dir ./provider-dev/docgen/provider-data\r\n```  \r\n\r\n## 8. Test web docs locally\r\n\r\n```bash\r\ncd website\r\n# test build\r\nyarn build\r\n\r\n# run local dev server\r\nyarn start\r\n```\r\n\r\n## 9. Publish web docs to GitHub Pages\r\n\r\nUnder __Pages__ in the repository, in the __Build and deployment__ section select __GitHub Actions__ as the __Source__. In Netlify DNS create the following records:\r\n\r\n| Source Domain | Record Type  | Target |\r\n|---------------|--------------|--------|\r\n| anthropic-provider.stackql.io | CNAME | stackql.github.io. |\r\n\r\n## License\r\n\r\nMIT\r\n\r\n## Contributing\r\n\r\nContributions are welcome! Please feel free to submit a Pull Request.","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fstackql%2Fstackql-provider-anthropic","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fstackql%2Fstackql-provider-anthropic","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fstackql%2Fstackql-provider-anthropic/lists"}