{"id":14156102,"url":"https://github.com/samchon/openapi","last_synced_at":"2025-05-16T09:05:35.202Z","repository":{"id":231975738,"uuid":"777774184","full_name":"samchon/openapi","owner":"samchon","description":"OpenAPI definitions, converters and LLM function calling schema composer.","archived":false,"fork":false,"pushed_at":"2025-05-12T05:57:26.000Z","size":4560,"stargazers_count":104,"open_issues_count":4,"forks_count":7,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-05-15T09:50:29.760Z","etag":null,"topics":["ai","chatgpt","claude","deepseek","gemini","llama","llm","llm-function-call","mcp","model-context-protocol","nestia","nestjs","openapi","openapi-generator","openapi-generators","openapi-validator","structured-output","swagger","typescript","typia"],"latest_commit_sha":null,"homepage":"https://nestia.io/api/modules/_samchon_openapi.html","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/samchon.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":["samchon"]}},"created_at":"2024-03-26T13:37:32.000Z","updated_at":"2025-05-12T05:57:29.000Z","dependencies_parsed_at":"2024-04-29T16:58:15.379Z","dependency_job_id":"22ec59bd-c833-43fb-8cbe-5ae1742b664a","html_url":"https://github.com/samchon/openapi","commit_stats":{"total_commits":117,"total_committers":3,"mean_commits":39.0,"dds":"0.017094017094017144","last_synced_commit":"6c6abd9fdb8dedc034c3055cf832e579696105e2"},"previous_names":["samchon/openapi"],"tags_count":59,"template":false,"template_full_
name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/samchon%2Fopenapi","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/samchon%2Fopenapi/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/samchon%2Fopenapi/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/samchon%2Fopenapi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/samchon","download_url":"https://codeload.github.com/samchon/openapi/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254501557,"owners_count":22081528,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","chatgpt","claude","deepseek","gemini","llama","llm","llm-function-call","mcp","model-context-protocol","nestia","nestjs","openapi","openapi-generator","openapi-generators","openapi-validator","structured-output","swagger","typescript","typia"],"created_at":"2024-08-17T08:05:13.240Z","updated_at":"2025-05-16T09:05:30.186Z","avatar_url":"https://github.com/samchon.png","language":"TypeScript","readme":"# `@samchon/openapi`\n```mermaid\nflowchart\n  subgraph \"OpenAPI Specification\"\n    v20(\"Swagger v2.0\") --upgrades--\u003e emended[[\"OpenAPI v3.1 (emended)\"]]\n    v30(\"OpenAPI v3.0\") --upgrades--\u003e emended\n    v31(\"OpenAPI v3.1\") --emends--\u003e emended\n  end\n  subgraph \"OpenAPI Generator\"\n    emended --normalizes--\u003e migration[[\"Migration Schema\"]]\n    migration --\"Artificial Intelligence\"--\u003e lfc{{\"LLM 
Function Calling\"}}\n    lfc --\"OpenAI\"--\u003e chatgpt(\"ChatGPT\")\n    lfc --\"Anthropic\"--\u003e claude(\"Claude\")\n    lfc --\"Google\"--\u003e gemini(\"Gemini\")\n    lfc --\"Meta\"--\u003e llama(\"Llama\")\n  end\n```\n\n[![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/samchon/openapi/blob/master/LICENSE)\n[![npm version](https://img.shields.io/npm/v/@samchon/openapi.svg)](https://www.npmjs.com/package/@samchon/openapi)\n[![Downloads](https://img.shields.io/npm/dm/@samchon/openapi.svg)](https://www.npmjs.com/package/@samchon/openapi)\n[![Build Status](https://github.com/samchon/openapi/workflows/build/badge.svg)](https://github.com/samchon/openapi/actions?query=workflow%3Abuild)\n[![API Documents](https://img.shields.io/badge/API-Documents-forestgreen)](https://samchon.github.io/openapi/api/)\n[![Discord Badge](https://img.shields.io/badge/discord-samchon-d91965?style=flat\u0026labelColor=5866f2\u0026logo=discord\u0026logoColor=white\u0026link=https://discord.gg/E94XhzrUCZ)](https://discord.gg/E94XhzrUCZ)\n\nOpenAPI definitions, converters and LLM function calling application composer.\n\n`@samchon/openapi` is a collection of OpenAPI types for every version, and converters between them. Among the OpenAPI types, there is an \"emended\" OpenAPI v3.1 specification, which removes ambiguous and duplicated expressions for clarity. Every conversion is based on the emended OpenAPI v3.1 specification.\n\n  1. [Swagger v2.0](https://github.com/samchon/openapi/blob/master/src/SwaggerV2.ts)\n  2. [OpenAPI v3.0](https://github.com/samchon/openapi/blob/master/src/OpenApiV3.ts)\n  3. [OpenAPI v3.1](https://github.com/samchon/openapi/blob/master/src/OpenApiV3_1.ts)\n  4. [**OpenAPI v3.1 emended**](https://github.com/samchon/openapi/blob/master/src/OpenApi.ts)\n\n`@samchon/openapi` also provides an LLM (Large Language Model) function calling application composer from the OpenAPI document, with many strategies. 
With the [`HttpLlm`](https://samchon.github.io/openapi/api/modules/HttpLlm.html) module, you can perform LLM function calling extremely easily, just by delivering the OpenAPI (Swagger) document.\n\n  - [`HttpLlm.application()`](https://samchon.github.io/openapi/api/functions/HttpLlm.application.html)\n  - [`IHttpLlmApplication\u003cModel\u003e`](https://samchon.github.io/openapi/api/interfaces/IHttpLlmApplication-1.html)\n  - [`IHttpLlmFunction\u003cModel\u003e`](https://samchon.github.io/openapi/api/interfaces/IHttpLlmFunction-1.html)\n  - Supported schemas\n    - [`IChatGptSchema`](https://samchon.github.io/openapi/api/types/IChatGptSchema-1.html): OpenAI ChatGPT\n    - [`IClaudeSchema`](https://samchon.github.io/openapi/api/types/IClaudeSchema-1.html): Anthropic Claude\n    - [`IGeminiSchema`](https://samchon.github.io/openapi/api/types/IGeminiSchema-1.html): Google Gemini\n    - [`ILlamaSchema`](https://samchon.github.io/openapi/api/types/ILlamaSchema-1.html): Meta Llama\n  - Middle layer schemas\n    - [`ILlmSchemaV3`](https://samchon.github.io/openapi/api/types/ILlmSchemaV3-1.html): middle layer based on OpenAPI v3.0 specification\n    - [`ILlmSchemaV3_1`](https://samchon.github.io/openapi/api/types/ILlmSchemaV3_1-1.html): middle layer based on OpenAPI v3.1 specification\n\n\u003e https://github.com/user-attachments/assets/e1faf30b-c703-4451-b68b-2e7a8170bce5\n\u003e\n\u003e Demonstration video composing an A.I. chatbot with `@samchon/openapi` and [`agentica`](https://github.com/wrtnlabs/agentica)\n\u003e\n\u003e - Shopping A.I. 
Chatbot Application: https://nestia.io/chat/shopping\n\u003e - Shopping Backend Repository: https://github.com/samchon/shopping-backend\n\u003e - Shopping Swagger Document (`@nestia/editor`): [https://nestia.io/editor/?url=...](https://nestia.io/editor/?simulate=true\u0026e2e=true\u0026url=https%3A%2F%2Fraw.githubusercontent.com%2Fsamchon%2Fshopping-backend%2Frefs%2Fheads%2Fmaster%2Fpackages%2Fapi%2Fswagger.json)\n\n\n\n\n## Setup\n```bash\nnpm install @samchon/openapi\n```\n\nHere is example code utilizing `@samchon/openapi` for LLM function calling:\n\n```typescript\nimport {\n  HttpLlm,\n  IChatGptSchema,\n  IHttpLlmApplication,\n  IHttpLlmFunction,\n  OpenApi,\n  OpenApiV3,\n  OpenApiV3_1,\n  SwaggerV2,\n} from \"@samchon/openapi\";\nimport fs from \"fs\";\nimport typia from \"typia\";\n\nconst main = async (): Promise\u003cvoid\u003e =\u003e {\n  // read swagger document and validate it\n  const swagger:\n    | SwaggerV2.IDocument\n    | OpenApiV3.IDocument\n    | OpenApiV3_1.IDocument = JSON.parse(\n    await fs.promises.readFile(\"swagger.json\", \"utf8\"),\n  );\n  typia.assert(swagger); // recommended\n\n  // convert to emended OpenAPI document,\n  // and compose LLM function calling application\n  const document: OpenApi.IDocument = OpenApi.convert(swagger);\n  const application: IHttpLlmApplication\u003c\"chatgpt\"\u003e = HttpLlm.application({\n    model: \"chatgpt\",\n    document,\n  });\n\n  // Let's imagine that LLM has selected a function to call\n  const func: IHttpLlmFunction\u003c\"chatgpt\"\u003e | undefined =\n    application.functions.find(\n      // (f) =\u003e f.name === \"llm_selected_function_name\"\n      (f) =\u003e f.path === \"/bbs/articles\" \u0026\u0026 f.method === \"post\",\n    );\n  if (func === undefined) throw new Error(\"No matched function exists.\");\n\n  // actual execution is by yourself\n  const article = await HttpLlm.execute({\n    connection: {\n    
  host: \"http://localhost:3000\",\n    },\n    application,\n    function: func,\n    input: {\n      // arguments composed by LLM\n      body: {\n        title: \"Hello, world!\",\n        body: \"Let's imagine that this argument is composed by LLM.\",\n        thumbnail: null,\n      },\n    },\n  });\n  console.log(\"article\", article);\n};\nmain().catch(console.error);\n```\n\n\n\n\n## OpenAPI Definitions\n```mermaid\nflowchart\n  v20(Swagger v2.0) --upgrades--\u003e emended[[\"\u003cb\u003e\u003cu\u003eOpenAPI v3.1 (emended)\u003c/u\u003e\u003c/b\u003e\"]]\n  v30(OpenAPI v3.0) --upgrades--\u003e emended\n  v31(OpenAPI v3.1) --emends--\u003e emended\n  emended --downgrades--\u003e v20d(Swagger v2.0)\n  emended --downgrades--\u003e v30d(OpenAPI v3.0)\n```\n\n`@samchon/openapi` supports every version of the OpenAPI specification with detailed TypeScript types.\n\n  - [Swagger v2.0](https://github.com/samchon/openapi/blob/master/src/SwaggerV2.ts)\n  - [OpenAPI v3.0](https://github.com/samchon/openapi/blob/master/src/OpenApiV3.ts)\n  - [OpenAPI v3.1](https://github.com/samchon/openapi/blob/master/src/OpenApiV3_1.ts)\n  - [**OpenAPI v3.1 emended**](https://github.com/samchon/openapi/blob/master/src/OpenApi.ts)\n\nAlso, `@samchon/openapi` provides an \"emended OpenAPI v3.1 definition\" which removes ambiguous and duplicated expressions for clarity. It emends the original OpenAPI v3.1 specification as listed below. You can compose the \"emended OpenAPI v3.1 document\" by calling the `OpenApi.convert()` function. 
\n\n  - Operation\n    - Merge `OpenApiV3_1.IPathItem.parameters` to `OpenApi.IOperation.parameters`\n    - Resolve references of `OpenApiV3_1.IOperation` members\n    - Escape references of `OpenApiV3_1.IComponents.examples`\n  - JSON Schema\n    - Decompose mixed type: `OpenApiV3_1.IJsonSchema.IMixed`\n    - Resolve nullable property: `OpenApiV3_1.IJsonSchema.__ISignificant.nullable`\n    - Array type utilizes only single `OpenAPI.IJsonSchema.IArray.items`\n    - Tuple type utilizes only `OpenApi.IJsonSchema.ITuple.prefixItems`\n    - Merge `OpenApiV3_1.IJsonSchema.IAnyOf` to `OpenApi.IJsonSchema.IOneOf`\n    - Merge `OpenApiV3_1.IJsonSchema.IRecursiveReference` to `OpenApi.IJsonSchema.IReference`\n    - Merge `OpenApiV3_1.IJsonSchema.IAllOf` to `OpenApi.IJsonSchema.IObject`\n\nConversion to another version's OpenAPI document is also based on the \"emended OpenAPI v3.1 specification\", as in the diagram above. You can do it through the `OpenApi.downgrade()` function. Therefore, if you want to convert a Swagger v2.0 document to an OpenAPI v3.0 document, you have to call two functions: `OpenApi.convert()` and then `OpenApi.downgrade()`.\n\nFinally, if you utilize the `typia` library with `@samchon/openapi` types, you can validate whether your OpenAPI document follows the standard specification. Just visit one of the playground links below, and paste your OpenAPI document's URL address. 
This validation strategy is superior to any other OpenAPI validator library.\n\n  - Playground Links\n    - [💻 Type assertion](https://typia.io/playground/?script=JYWwDg9gTgLgBAbzgeTAUwHYEEzADQrra4BqAzAapjsOQPoCMBAygO4CGA5p2lCQExwAvnABmUCCDgAiAAIBndiADGACwgYA9BCLtc0gNwAoUJFhwYAT1zsxEqdKs3DRo8o3z4IdsAxwAvHDs8pYYynAAFACUAFxwAAr2wPJoADwAbhDAACYAfAH5CEZwcJqacADiAKIAKnAAmsgAqgBKKPFVAHJY8QCScAAiyADCTQCyXTXFcO4YnnBQaPKQc2hxLUsrKQFBHMDwomgwahHTJdKqMDBg8jFlUOysAHSc+6oArgBG7ylQszCYGBPdwgTSKFTqLQ6TB6YCabyeXiaNAADyUYAANktNOkyE8AAzaXTAJ4AK3kGmk0yixhKs3m2QgyneIEBcXYGEsO0ePngi2WHjQZIpGGixmmZTgNXqHTgWGYzCqLRqvWQnWmTmA7CewV+MAq73YUGyqTOcAAPoRqKQyIwnr0BkyWYCzZaqMRaHiHU7WRgYK64GwuDw+Px7Y7mb7-SVchFGZHATTXCVJcM1SQlXUasg4FUJp0BlUBtN6fA0L7smhsnF3TRwz7ATta7hgRp0rwYHGG36k3SPBAsU9fKIIBFy5hK9kk0JjN5fNFgexjqoIvSB0LeBIoDSgA)\n    - [💻 Detailed validation](https://typia.io/playground/?script=JYWwDg9gTgLgBAbzgeTAUwHYEEzADQrra4BqAzAapjsOQPoCMBAygO4CGA5p2lCQExwAvnABmUCCDgAiAAIBndiADGACwgYA9BCLtc0gNwAoUJFhwYAT1zsxEqdKs3DRo8o3z4IdsAxwAvHDs8pYYynAAFACUAFxwAAr2wPJoADwAbhDAACYAfAH5CEZwcJqacADiAKIAKnAAmsgAqgBKKPFVAHJY8QCScAAiyADCTQCyXTXFcO4YnnBQaPKQc2hxLUsrKQFBHMDwomgwahHTJdKqMDBg8jFlUOysAHSc+6oArgBG7ylQszCYGBPdwgTSKFTqLQ6TB6YCabyeXiaNAADyUYAANktNOkyE8AAzaXTAJ4AK3kGmk0yixhKs3m2QgyneIEBcXYGEsO0ePngi2WHjQZIpGGixmmZTgNXqHTgJCwABlegMsDVeshOtN6Xylu8MfBAk5gOwnul2BicuwAakznAAD6EaikMiMJ7KpkswG2h1UYi0PHu5msjAwb1wNhcHh8fhugYe4Ohkq5CKMoOAmnTYCiSL8vVA+TvZTKJbyAL+QKic0pKKIW30iBYp6+UQQCK5-VPXgSKDyDMlEqLGDvKAYWnCVwlSXDDUkKotOo1ZBwKoTToDKoDLUeeBoYPZNDZOK+mix+OAnbH3DAjTpXgwFNnkN9mYeBtC5ut3eYffZDNCYzeL40TAlaJz1o2XbQDSQA)\n\n```typescript\nimport { OpenApi, OpenApiV3, OpenApiV3_1, SwaggerV2 } from \"@samchon/openapi\";\nimport typia from \"typia\";\n \nconst main = async (): Promise\u003cvoid\u003e =\u003e {\n  // GET YOUR OPENAPI DOCUMENT\n  const response: Response = await fetch(\n    \"https://raw.githubusercontent.com/samchon/openapi/master/examples/v3.0/openai.json\"\n  );\n  const document: any = await 
response.json();\n \n  // TYPE VALIDATION\n  const result = typia.validate\u003c\n    | OpenApiV3_1.IDocument\n    | OpenApiV3.IDocument\n    | SwaggerV2.IDocument\n  \u003e(document);\n  if (result.success === false) {\n    console.error(result.errors);\n    return;\n  }\n \n  // CONVERT TO EMENDED\n  const emended: OpenApi.IDocument = OpenApi.convert(document);\n  console.info(emended);\n};\nmain().catch(console.error);\n```\n\n\n\n\n## LLM Function Calling\n### Preface\n```mermaid\nflowchart TD\n  subgraph \"OpenAPI Specification\"\n    v20(\"Swagger v2.0\") --upgrades--\u003e emended[[\"OpenAPI v3.1 (emended)\"]]\n    v30(\"OpenAPI v3.0\") --upgrades--\u003e emended\n    v31(\"OpenAPI v3.1\") --emends--\u003e emended\n  end\n  subgraph \"OpenAPI Generator\"\n    emended --normalizes--\u003e migration[[\"Migration Schema\"]]\n    migration --\"Artificial Intelligence\"--\u003e lfc{{\"\u003cb\u003e\u003cu\u003eLLM Function Calling\u003c/b\u003e\u003c/u\u003e\"}}\n    lfc --\"OpenAI\"--\u003e chatgpt(\"ChatGPT\")\n    lfc --\"Anthropic\"--\u003e claude(\"Claude\")\n    lfc --\"Google\"--\u003e gemini(\"Gemini\")\n    lfc --\"Meta\"--\u003e llama(\"Llama\")\n  end\n```\n\nLLM function calling application from OpenAPI document.\n\n`@samchon/openapi` provides an LLM (Large Language Model) function calling application composed from the \"emended OpenAPI v3.1 document\". Therefore, if you have any HTTP backend server and have succeeded in building an OpenAPI document, you can easily make an A.I. chatbot application.\n\nIn the A.I. chatbot, the LLM will select the proper function to remotely call from its conversations with the user, and fill in the function's arguments automatically. 
If you actually execute the function call through the [`HttpLlm.execute()`](https://samchon.github.io/openapi/api/functions/HttpLlm.execute.html) function, it is the \"LLM function call.\"\n\nLet's enjoy the fantastic LLM function calling feature very easily with `@samchon/openapi`.\n\n  - Application\n    - [`HttpLlm.application()`](https://samchon.github.io/openapi/api/functions/HttpLlm.application.html)\n    - [`IHttpLlmApplication`](https://samchon.github.io/openapi/api/interfaces/IHttpLlmApplication-1.html)\n    - [`IHttpLlmFunction`](https://samchon.github.io/openapi/api/interfaces/IHttpLlmFunction-1.html)\n  - Schemas\n    - [`IChatGptSchema`](https://samchon.github.io/openapi/api/types/IChatGptSchema-1.html): OpenAI ChatGPT\n    - [`IClaudeSchema`](https://samchon.github.io/openapi/api/types/IClaudeSchema-1.html): Anthropic Claude\n    - [`IGeminiSchema`](https://samchon.github.io/openapi/api/types/IGeminiSchema-1.html): Google Gemini\n    - [`ILlamaSchema`](https://samchon.github.io/openapi/api/types/ILlamaSchema-1.html): Meta Llama\n    - [`ILlmSchemaV3`](https://samchon.github.io/openapi/api/types/ILlmSchemaV3-1.html): middle layer based on OpenAPI v3.0 specification\n    - [`ILlmSchemaV3_1`](https://samchon.github.io/openapi/api/types/ILlmSchemaV3_1-1.html): middle layer based on OpenAPI v3.1 specification\n  - Type Checkers\n    - [`ChatGptTypeChecker`](https://github.com/samchon/openapi/blob/master/src/utils/ChatGptTypeChecker.ts)\n    - [`ClaudeTypeChecker`](https://github.com/samchon/openapi/blob/master/src/utils/ClaudeTypeChecker.ts)\n    - [`GeminiTypeChecker`](https://github.com/samchon/openapi/blob/master/src/utils/GeminiTypeChecker.ts)\n    - [`LlamaTypeChecker`](https://github.com/samchon/openapi/blob/master/src/utils/LlamaTypeChecker.ts)\n    - [`LlmTypeCheckerV3`](https://github.com/samchon/openapi/blob/master/src/utils/LlmTypeCheckerV3.ts)\n    - 
[`LlmTypeCheckerV3_1`](https://github.com/samchon/openapi/blob/master/src/utils/LlmTypeCheckerV3_1.ts)\n\n\u003e [!NOTE]\n\u003e\n\u003e You can also compose [`ILlmApplication`](https://samchon.github.io/openapi/api/interfaces/ILlmApplication-1.html) from a class type with `typia`.\n\u003e\n\u003e https://typia.io/docs/llm/application\n\u003e\n\u003e ```typescript\n\u003e import { ILlmApplication } from \"@samchon/openapi\";\n\u003e import typia from \"typia\";\n\u003e\n\u003e const app: ILlmApplication\u003c\"chatgpt\"\u003e =\n\u003e   typia.llm.application\u003cYourClassType, \"chatgpt\"\u003e();\n\u003e ```\n\n\u003e [!TIP]\n\u003e\n\u003e The LLM selects the proper function and fills its arguments.\n\u003e \n\u003e Nowadays, most LLMs (Large Language Models) like OpenAI's support the \"function calling\" feature. \"LLM function calling\" means that the LLM automatically selects a proper function and fills its parameter values from the conversation with the user (maybe by chatting text).\n\u003e \n\u003e https://platform.openai.com/docs/guides/function-calling\n\n### Execution\nActual function call execution is by yourself.\n\nLLM (Large Language Model) providers like OpenAI select a proper function to call from the conversations with users, and fill in its arguments. However, the function calling feature supported by LLM providers does not perform the function call execution itself. The actual execution responsibility is on you.\n\nIn `@samchon/openapi`, you can execute the LLM function calling by the [`HttpLlm.execute()`](https://samchon.github.io/openapi/api/functions/HttpLlm.execute.html) (or [`HttpLlm.propagate()`](https://samchon.github.io/openapi/api/functions/HttpLlm.propagate.html)) function. Here is example code executing the LLM function calling through the [`HttpLlm.execute()`](https://samchon.github.io/openapi/api/functions/HttpLlm.execute.html) function. 
As you can see, to execute the LLM function call, you have to deliver the following information:\n\n  - Connection info to the HTTP server\n  - Application of the LLM function calling\n  - LLM function schema to call\n  - Arguments for the function call (maybe composed by LLM)\n\nHere is the example code executing the LLM function call with `@samchon/openapi`.\n\n  - Example Code: [`test/examples/chatgpt-function-call-to-sale-create.ts`](https://github.com/samchon/openapi/blob/master/test/examples/chatgpt-function-call-to-sale-create.ts)\n  - Prompt describing the product to create: [`Microsoft Surface Pro 9`](https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md)\n  - Result of the Function Calling: [`examples/arguments/chatgpt.microsoft-surface-pro-9.input.json`](https://github.com/samchon/openapi/blob/master/examples/function-calling/arguments/chatgpt.microsoft-surface-pro-9.input.json)\n\n```typescript\nimport {\n  HttpLlm,\n  IChatGptSchema,\n  IHttpLlmApplication,\n  IHttpLlmFunction,\n  OpenApi,\n  OpenApiV3,\n  OpenApiV3_1,\n  SwaggerV2,\n} from \"@samchon/openapi\";\nimport OpenAI from \"openai\";\nimport typia from \"typia\";\n\nconst main = async (): Promise\u003cvoid\u003e =\u003e {\n  // Read swagger document and validate it\n  const swagger:\n    | SwaggerV2.IDocument\n    | OpenApiV3.IDocument\n    | OpenApiV3_1.IDocument = await fetch(\n    \"https://raw.githubusercontent.com/samchon/shopping-backend/master/packages/api/swagger.json\",\n  ).then((r) =\u003e r.json());\n  typia.assert(swagger); // recommended\n\n  // convert to emended OpenAPI document,\n  // and compose LLM function calling application\n  const document: OpenApi.IDocument = OpenApi.convert(swagger);\n  const application: IHttpLlmApplication\u003c\"chatgpt\"\u003e = HttpLlm.application({\n    model: \"chatgpt\",\n    document,\n  });\n\n  // Let's imagine that LLM has selected a function to call\n  const func: 
IHttpLlmFunction\u003c\"chatgpt\"\u003e | undefined =\n    application.functions.find(\n      // (f) =\u003e f.name === \"llm_selected_function_name\"\n      (f) =\u003e f.path === \"/shoppings/sellers/sale\" \u0026\u0026 f.method === \"post\",\n    );\n  if (func === undefined) throw new Error(\"No matched function exists.\");\n\n  // Get arguments by ChatGPT function calling\n  const client: OpenAI = new OpenAI({\n    apiKey: \"\u003cYOUR_OPENAI_API_KEY\u003e\",\n  });\n  const completion: OpenAI.ChatCompletion =\n    await client.chat.completions.create({\n      model: \"gpt-4o\",\n      messages: [\n        {\n          role: \"system\",\n          content:\n            \"You are a helpful customer support assistant. Use the supplied tools to assist the user.\",\n        },\n        {\n          role: \"user\",\n          content: \"\u003cDESCRIPTION ABOUT THE SALE\u003e\",\n          // https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md\n        },\n      ],\n      tools: [\n        {\n          type: \"function\",\n          function: {\n            name: func.name,\n            description: func.description,\n            parameters: func.parameters as Record\u003cstring, any\u003e,\n          },\n        },\n      ],\n    });\n  const toolCall: OpenAI.ChatCompletionMessageToolCall =\n    completion.choices[0].message.tool_calls![0];\n\n  // Actual execution by yourself\n  const article = await HttpLlm.execute({\n    connection: {\n      host: \"http://localhost:37001\",\n    },\n    application,\n    function: func,\n    input: JSON.parse(toolCall.function.arguments),\n  });\n  console.log(\"article\", article);\n};\nmain().catch(console.error);\n```\n\n### Validation Feedback\n```typescript\nimport { IHttpLlmFunction, IValidation } from \"@samchon/openapi\";\nimport { FunctionCall } from \"pseudo\";\n\nexport const correctFunctionCall = (p: {\n  call: FunctionCall;\n  functions: 
Array\u003cIHttpLlmFunction\u003c\"chatgpt\"\u003e\u003e;\n  retry: (reason: string, errors?: IValidation.IError[]) =\u003e Promise\u003cunknown\u003e;\n}): Promise\u003cunknown\u003e =\u003e {\n  // FIND FUNCTION\n  const func: IHttpLlmFunction\u003c\"chatgpt\"\u003e | undefined =\n    p.functions.find((f) =\u003e f.name === p.call.name);\n  if (func === undefined) {\n    // never happened in my experience\n    return p.retry(\n      \"Unable to find the matched function name. Try it again.\",\n    );\n  }\n\n  // VALIDATE\n  const result: IValidation\u003cunknown\u003e = func.validate(p.call.arguments);\n  if (result.success === false) {\n    // 1st trial: 30% (gpt-4o-mini in shopping mall chatbot)\n    // 2nd trial with validation feedback: 99%\n    // 3rd trial with validation feedback again: never has failed\n    return p.retry(\n      \"Type errors are detected. Correct it through validation errors.\",\n      result.errors,\n    );\n  }\n  return result.data;\n};\n```\n\nIs LLM Function Calling perfect? No, absolutely not.\n\nLLM (Large Language Model) service vendors like OpenAI make a lot of type-level mistakes when composing the arguments of function calling or structured output. Even when the target schema is super simple, like an `Array\u003cstring\u003e` type, the LLM often fills it with just a `string` typed value.\n\nIn my experience, OpenAI `gpt-4o-mini` (`8b` parameters) makes type-level mistakes about 70% of the time when filling the arguments of function calling to a shopping mall service. 
To overcome the imperfection of such LLM function calling, `@samchon/openapi` supports the validation feedback strategy.\n\nThe key concept of the validation feedback strategy is to let the LLM construct invalidly typed arguments first, and then inform the LLM of the detailed type errors, so that the LLM is induced to emend the wrongly typed arguments on the next turn, by using the `IHttpLlmFunction\u003cModel\u003e.validate()` function.\n\nThe validator function embedded in `IHttpLlmFunction\u003cModel\u003e.validate()` is exactly the same as the [`typia.validate\u003cT\u003e()`](https://typia.io/docs/validators/validate) function, so it is more detailed and accurate than any other validator, as shown below. By this validation feedback strategy, the 30% success rate of the 1st function calling trial has been increased to a 99% success rate on the 2nd trial, and it has never failed from the 3rd trial.\n\nComponents               | `typia` | `TypeBox` | `ajv` | `io-ts` | `zod` | `C.V.`\n-------------------------|--------|-----------|-------|---------|-------|------------------\n**Easy to use**          | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ \n[Object (simple)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectSimple.ts)          | ✔ | ✔ | ✔ | ✔ | ✔ | ✔\n[Object (hierarchical)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectHierarchical.ts)    | ✔ | ✔ | ✔ | ✔ | ✔ | ✔\n[Object (recursive)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectRecursive.ts)       | ✔ | ❌ | ✔ | ✔ | ✔ | ✔\n[Object (union, implicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectUnionImplicit.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌\n[Object (union, explicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectUnionExplicit.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌\n[Object (additional tags)](https://github.com/samchon/typia/#comment-tags)        | ✔ | ✔ | ✔ | ✔ | ✔ | ✔\n[Object (template literal 
types)](https://github.com/samchon/typia/blob/master/test/src/structures/TemplateUnion.ts) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌\n[Object (dynamic properties)](https://github.com/samchon/typia/blob/master/test/src/structures/DynamicTemplate.ts) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌\n[Array (rest tuple)](https://github.com/samchon/typia/blob/master/test/src/structures/TupleRestAtomic.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌\n[Array (hierarchical)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayHierarchical.ts)     | ✔ | ✔ | ✔ | ✔ | ✔ | ✔\n[Array (recursive)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursive.ts)        | ✔ | ✔ | ✔ | ✔ | ✔ | ❌\n[Array (recursive, union)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursiveUnionExplicit.ts) | ✔ | ✔ | ❌ | ✔ | ✔ | ❌\n[Array (R+U, implicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursiveUnionImplicit.ts)    | ✅ | ❌ | ❌ | ❌ | ❌ | ❌\n[Array (repeated)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRepeatedNullable.ts)    | ✅ | ❌ | ❌ | ❌ | ❌ | ❌\n[Array (repeated, union)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRepeatedUnionWithTuple.ts)    | ✅ | ❌ | ❌ | ❌ | ❌ | ❌\n[**Ultimate Union Type**](https://github.com/samchon/typia/blob/master/test/src/structures/UltimateUnion.ts)  | ✅ | ❌ | ❌ | ❌ | ❌ | ❌\n\n\u003e `C.V.` means `class-validator`\n\n### Separation\nArguments from both Human and LLM sides.\n\nWhen composing parameter arguments through LLM (Large Language Model) function calling, there can be cases where some parameters (or nested properties) must be composed not by the LLM, but by a human. A file uploading feature, or sensitive information like a secret key (password), are representative examples.\n\nIn that case, you can configure the LLM function calling schemas to exclude such human-side parameters (or nested properties) by the `IHttpLlmApplication.options.separate` property. 
Instead, you have to merge both the human and LLM composed parameters into one by calling [`HttpLlm.mergeParameters()`](https://samchon.github.io/openapi/api/functions/HttpLlm.mergeParameters.html) before the LLM function call execution with the [`HttpLlm.execute()`](https://samchon.github.io/openapi/api/functions/HttpLlm.execute.html) function.\n\nHere is the example code separating the file uploading feature from the LLM function calling schema, and combining both the human and LLM composed parameters into one before the LLM function call execution.\n\n  - Example Code: [`test/examples/claude-function-call-separate-to-sale-create.ts`](https://github.com/samchon/openapi/blob/master/test/examples/claude-function-call-separate-to-sale-create.ts)\n  - Prompt describing the product to create: [`Microsoft Surface Pro 9`](https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md)\n  - Result of the Function Calling: [`examples/arguments/claude.microsoft-surface-pro-9.input.json`](https://github.com/samchon/openapi/blob/master/examples/function-calling/arguments/claude.microsoft-surface-pro-9.input.json)\n\n```typescript\nimport Anthropic from \"@anthropic-ai/sdk\";\nimport {\n  ClaudeTypeChecker,\n  HttpLlm,\n  IClaudeSchema,\n  IHttpLlmApplication,\n  IHttpLlmFunction,\n  OpenApi,\n  OpenApiV3,\n  OpenApiV3_1,\n  SwaggerV2,\n} from \"@samchon/openapi\";\nimport typia from \"typia\";\n\nconst main = async (): Promise\u003cvoid\u003e =\u003e {\n  // Read swagger document and validate it\n  const swagger:\n    | SwaggerV2.IDocument\n    | OpenApiV3.IDocument\n    | OpenApiV3_1.IDocument = await fetch(\n    \"https://raw.githubusercontent.com/samchon/shopping-backend/master/packages/api/swagger.json\",\n  ).then((r) =\u003e r.json());\n  typia.assert(swagger); // recommended\n\n  // convert to emended OpenAPI document,\n  // and compose LLM function calling application\n  const document: OpenApi.IDocument = 
OpenApi.convert(swagger);\n  const application: IHttpLlmApplication\u003c\"claude\"\u003e = HttpLlm.application({\n    model: \"claude\",\n    document,\n    options: {\n      reference: true,\n      separate: (schema) =\u003e\n        ClaudeTypeChecker.isString(schema) \u0026\u0026\n        !!schema.contentMediaType?.startsWith(\"image\"),\n    },\n  });\n\n  // Let's imagine that LLM has selected a function to call\n  const func: IHttpLlmFunction\u003c\"claude\"\u003e | undefined =\n    application.functions.find(\n      // (f) =\u003e f.name === \"llm_selected_function_name\"\n      (f) =\u003e f.path === \"/shoppings/sellers/sale\" \u0026\u0026 f.method === \"post\",\n    );\n  if (func === undefined) throw new Error(\"No matched function exists.\");\n\n  // Get arguments by Claude function calling\n  const client: Anthropic = new Anthropic({\n    apiKey: \"\u003cYOUR_ANTHROPIC_API_KEY\u003e\",\n  });\n  const completion: Anthropic.Message = await client.messages.create({\n    model: \"claude-3-5-sonnet-latest\",\n    max_tokens: 8_192,\n    system:\n      \"You are a helpful customer support assistant. Use the supplied tools to assist the user.\",\n    messages: [\n      {\n        role: \"user\",\n        content: \"\u003cDESCRIPTION ABOUT THE SALE\u003e\",\n        // https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md\n      },\n    ],\n    tools: [\n      {\n        name: func.name,\n        description: func.description,\n        input_schema: func.separated!.llm as any,\n      },\n    ],\n  });\n  const toolCall: Anthropic.ToolUseBlock = completion.content.filter(\n    (c) =\u003e c.type === \"tool_use\",\n  )[0]!;\n\n  // Actual execution by yourself\n  const article = await HttpLlm.execute({\n    connection: {\n      host: \"http://localhost:37001\",\n    },\n    application,\n    function: func,\n    input: HttpLlm.mergeParameters({\n      function: func,\n      llm: toolCall.input as any,\n      human: {\n        // Human composed parameter values\n        content: {\n          files: [],\n          thumbnails: [\n            {\n              name: \"thumbnail\",\n              extension: \"jpeg\",\n              url: \"https://serpapi.com/searches/673d3a37e45f3316ecd8ab3e/images/1be25e6e2b1fb7509f1af89c326cb41749301b94375eb5680b9bddcdf88fabcb.jpeg\",\n            },\n            // ...\n          ],\n        },\n      },\n    }),\n  });\n  console.log(\"article\", article);\n};\nmain().catch(console.error);\n```\n\n\n\n\n## Agentica\n![agentica-conceptual-diagram](https://github.com/user-attachments/assets/d7ebbd1f-04d3-4b0d-9e2a-234e29dd6c57)\n\nhttps://github.com/wrtnlabs/agentica\n\n`agentica` is the simplest **Agentic AI** library, specialized in **LLM Function Calling** with `@samchon/openapi`.\n\nWith it, you don't need to compose a complicated agent graph or workflow. Instead, just deliver **Swagger/OpenAPI** documents or **TypeScript class** types linearly to `agentica`. 
Then `agentica` will do everything with function calling.\n\nLook at the demonstration below, and see how easy and powerful `agentica` is when combined with `@samchon/openapi`.\n\n```typescript\nimport { Agentica } from \"@agentica/core\";\nimport typia from \"typia\";\n\nconst agent = new Agentica({\n  controllers: [\n    await fetch(\n      \"https://shopping-be.wrtn.ai/editor/swagger.json\",\n    ).then(r =\u003e r.json()),\n    typia.llm.application\u003cShoppingCounselor\u003e(),\n    typia.llm.application\u003cShoppingPolicy\u003e(),\n    typia.llm.application\u003cShoppingSearchRag\u003e(),\n  ],\n});\nawait agent.conversate(\"I wanna buy MacBook Pro\");\n```\n","funding_links":["https://github.com/sponsors/samchon"],"categories":["others","TypeScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsamchon%2Fopenapi","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsamchon%2Fopenapi","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsamchon%2Fopenapi/lists"}