{"id":13641586,"url":"https://github.com/marcominerva/ChatGptNet","last_synced_at":"2025-04-20T11:31:25.705Z","repository":{"id":142143049,"uuid":"612105276","full_name":"marcominerva/ChatGptNet","owner":"marcominerva","description":"A ChatGPT integration library for .NET, supporting both OpenAI and Azure OpenAI Service","archived":false,"fork":false,"pushed_at":"2024-10-15T07:32:07.000Z","size":4634,"stargazers_count":303,"open_issues_count":2,"forks_count":36,"subscribers_count":17,"default_branch":"master","last_synced_at":"2024-10-15T07:54:33.299Z","etag":null,"topics":["azure-openai","azure-openai-api","chatgpt","csharp","dotnet","embedding","embeddings","embeddings-similarity","hacktoberfest","net","openai","openai-api"],"latest_commit_sha":null,"homepage":"","language":"C#","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/marcominerva.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-03-10T07:58:24.000Z","updated_at":"2024-10-15T07:31:43.000Z","dependencies_parsed_at":"2024-02-12T15:16:22.585Z","dependency_job_id":"626c39d9-6a73-4642-b8e3-afe47576b33e","html_url":"https://github.com/marcominerva/ChatGptNet","commit_stats":null,"previous_names":[],"tags_count":52,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marcominerva%2FChatGptNet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marcominerva%2FChatGptNet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marcominerva%2FChatGptNet/releases","manif
ests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marcominerva%2FChatGptNet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/marcominerva","download_url":"https://codeload.github.com/marcominerva/ChatGptNet/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223827436,"owners_count":17209794,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["azure-openai","azure-openai-api","chatgpt","csharp","dotnet","embedding","embeddings","embeddings-similarity","hacktoberfest","net","openai","openai-api"],"created_at":"2024-08-02T01:01:22.016Z","updated_at":"2024-11-09T12:30:29.780Z","avatar_url":"https://github.com/marcominerva.png","language":"C#","readme":"# ChatGPT for .NET\n\n[![Lint Code Base](https://github.com/marcominerva/ChatGptNet/actions/workflows/linter.yml/badge.svg)](https://github.com/marcominerva/ChatGptNet/actions/workflows/linter.yml)\n[![CodeQL](https://github.com/marcominerva/ChatGptNet/actions/workflows/codeql.yml/badge.svg)](https://github.com/marcominerva/ChatGptNet/actions/workflows/codeql.yml)\n[![NuGet](https://img.shields.io/nuget/v/ChatGptNet.svg?style=flat-square)](https://www.nuget.org/packages/ChatGptNet)\n[![Nuget](https://img.shields.io/nuget/dt/ChatGptNet)](https://www.nuget.org/packages/ChatGptNet)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/marcominerva/ChatGptNet/blob/master/LICENSE)\n\nA ChatGPT integration library for .NET, supporting both OpenAI and Azure OpenAI Service.\n\n## 
Installation\n\nThe library is available on [NuGet](https://www.nuget.org/packages/ChatGptNet). Just search for *ChatGptNet* in the **Package Manager GUI** or run the following command in the **.NET CLI**:\n\n```shell\ndotnet add package ChatGptNet\n```\n\n## Configuration\n\nRegister the ChatGPT service at application startup:\n\n```csharp\nbuilder.Services.AddChatGpt(options =\u003e\n{\n    // OpenAI.\n    //options.UseOpenAI(apiKey: \"\", organization: \"\");\n\n    // Azure OpenAI Service.\n    //options.UseAzure(resourceName: \"\", apiKey: \"\", authenticationType: AzureAuthenticationType.ApiKey);\n\n    options.DefaultModel = \"my-model\";\n    options.DefaultEmbeddingModel = \"text-embedding-ada-002\";\n    options.MessageLimit = 16;  // Default: 10\n    options.MessageExpiration = TimeSpan.FromMinutes(5);    // Default: 1 hour\n    options.DefaultParameters = new ChatGptParameters\n    {\n        MaxTokens = 800,\n        //MaxCompletionTokens = 800,  // o1 series models support this property instead of MaxTokens\n        Temperature = 0.7\n    };\n});\n```\n\n**ChatGptNet** supports both OpenAI and Azure OpenAI Service, so it is necessary to set the correct configuration settings based on the chosen provider:\n\n#### OpenAI (UseOpenAI)\n\n- _ApiKey_: it is available in the [User settings](https://platform.openai.com/account/api-keys) page of the OpenAI account (required).\n- _Organization_: for users who belong to multiple organizations, you can also specify which organization is used. Usage from these API requests will count against the specified organization's subscription quota (optional).\n\n#### Azure OpenAI Service (UseAzure)\n\n- _ResourceName_: the name of your Azure OpenAI Resource (required).\n- _ApiKey_: Azure OpenAI provides two methods for authentication. You can use either API Keys or Azure Active Directory (required).\n- _ApiVersion_: the version of the API to use (optional). 
Allowed values:\n  - 2023-05-15\n  - 2023-06-01-preview\n  - 2023-10-01-preview\n  - 2024-02-01\n  - 2024-02-15-preview\n  - 2024-03-01-preview\n  - 2024-04-01-preview\n  - 2024-05-01-preview\n  - 2024-06-01\n  - 2024-07-01-preview\n  - 2024-08-01-preview\n  - 2024-09-01-preview\n  - 2024-10-01-preview\n  - 2024-10-21 (default)\n- _AuthenticationType_: it specifies if the key is an actual API Key or an [Azure Active Directory token](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/managed-identity) (optional, default: \"ApiKey\").\n\n### DefaultModel and DefaultEmbeddingModel\n\nChatGPT can be used with different models for chat completion, both on OpenAI and Azure OpenAI Service. With the *DefaultModel* property, you can specify the default model that will be used, unless you pass an explicit value in the **AskAsync** or **AskStreamAsync** methods.\n\nEven if it is not strictly necessary for chat conversation, the library also supports the Embedding API, on both [OpenAI](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) and [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#embeddings). As for chat completion, embeddings can be done with different models. 
With the *DefaultEmbeddingModel* property, you can specify the default model that will be used, unless you pass an explicit value in the **GenerateEmbeddingAsync** method.\n\n##### OpenAI\n\nCurrently available models are:\n- gpt-3.5-turbo\n- gpt-3.5-turbo-16k\n- gpt-4\n- gpt-4-32k\n- gpt-4-turbo\n- gpt-4o\n- gpt-4o-mini\n- o1-preview\n- o1-mini\n\nThey have fixed names, available in the [OpenAIChatGptModels.cs file](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/OpenAIChatGptModels.cs).\n\n##### Azure OpenAI Service\n\nIn Azure OpenAI Service, you're required to first [deploy a model](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model) before you can make calls. When you deploy a model, you need to assign it a name that must match the name you use with **ChatGptNet**.\n\n\u003e **Note**\nSome models are not available in all regions. You can refer to the [Model Summary table and region availability page](https://learn.microsoft.com/azure/cognitive-services/openai/concepts/models#model-summary-table-and-region-availability) to check current availability.\n\n### Caching, MessageLimit and MessageExpiration\n\nChatGPT aims to support conversational scenarios: users can talk to ChatGPT without specifying the full context for every interaction. However, conversation history isn't managed by OpenAI or Azure OpenAI Service, so it's up to us to retain the current state. By default, **ChatGptNet** handles this requirement using a [MemoryCache](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.caching.memory.memorycache) that stores messages for each conversation. The behavior can be configured using the following properties:\n\n* *MessageLimit*: specifies how many messages for each conversation must be saved. 
When this limit is reached, the oldest messages are automatically removed.\n* *MessageExpiration*: specifies the time interval used to maintain messages in cache, regardless of their count.\n\nIf necessary, it is possible to provide a custom cache by implementing the [IChatGptCache](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/IChatGptCache.cs) interface and then calling the **WithCache** extension method:\n\n```csharp\npublic class LocalMessageCache : IChatGptCache\n{\n    private readonly Dictionary\u003cGuid, IEnumerable\u003cChatGptMessage\u003e\u003e localCache = new();\n\n    public Task SetAsync(Guid conversationId, IEnumerable\u003cChatGptMessage\u003e messages, TimeSpan expiration, CancellationToken cancellationToken = default)\n    {\n        localCache[conversationId] = messages.ToList();\n        return Task.CompletedTask;\n    }\n\n    public Task\u003cIEnumerable\u003cChatGptMessage\u003e?\u003e GetAsync(Guid conversationId, CancellationToken cancellationToken = default)\n    {\n        localCache.TryGetValue(conversationId, out var messages);\n        return Task.FromResult(messages);\n    }\n\n    public Task RemoveAsync(Guid conversationId, CancellationToken cancellationToken = default)\n    {\n        localCache.Remove(conversationId);\n        return Task.CompletedTask;\n    }\n\n    public Task\u003cbool\u003e ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)\n    {\n        var exists = localCache.ContainsKey(conversationId);\n        return Task.FromResult(exists);\n    }\n}\n\n// Registers the custom cache at application startup.\nbuilder.Services.AddChatGpt(/* ... */).WithCache\u003cLocalMessageCache\u003e();\n```\n\nWe can also set ChatGPT parameters for chat completion at startup. 
Check the [official documentation](https://platform.openai.com/docs/api-reference/chat/create) for the list of available parameters and their meaning.\n\n### Configuration using an external source\n\nThe configuration can be automatically read from [IConfiguration](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.configuration.iconfiguration), using, for example, a _ChatGPT_ section in the _appsettings.json_ file:\n\n```\n\"ChatGPT\": {\n    \"Provider\": \"OpenAI\",               // Optional. Allowed values: OpenAI (default) or Azure\n    \"ApiKey\": \"\",                       // Required\n    //\"Organization\": \"\",               // Optional, used only by OpenAI\n    \"ResourceName\": \"\",                 // Required when using Azure OpenAI Service\n    \"ApiVersion\": \"2024-10-21\",         // Optional, used only by Azure OpenAI Service (default: 2024-10-21)\n    \"AuthenticationType\": \"ApiKey\",     // Optional, used only by Azure OpenAI Service. Allowed values: ApiKey (default) or ActiveDirectory\n\n    \"DefaultModel\": \"my-model\",\n    \"DefaultEmbeddingModel\": \"text-embedding-ada-002\", // Optional, set it if you want to use embeddings\n    \"MessageLimit\": 20,\n    \"MessageExpiration\": \"00:30:00\",\n    \"ThrowExceptionOnError\": true       // Optional, default: true\n    //\"User\": \"UserName\",\n    //\"DefaultParameters\": {\n    //    \"Temperature\": 0.8,\n    //    \"TopP\": 1,\n    //    \"MaxTokens\": 500,\n    //    \"MaxCompletionTokens\": null,  // o1 series models support this property instead of MaxTokens\n    //    \"PresencePenalty\": 0,\n    //    \"FrequencyPenalty\": 0,\n    //    \"ResponseFormat\": { \"Type\": \"text\" }, // Allowed values for Type: text (default) or json_object\n    //    \"Seed\": 42                            // Optional (any integer value)\n    //},\n    //\"DefaultEmbeddingParameters\": {\n    //    \"Dimensions\": 1536\n    //}\n}\n```\n\nAnd then use the corresponding overload of the 
**AddChatGpt** method:\n\n```csharp\n// Adds ChatGPT service using settings from IConfiguration.\nbuilder.Services.AddChatGpt(builder.Configuration);\n```\n\n### Configuring ChatGptNet dynamically\n\nThe **AddChatGpt** method also has an overload that accepts an [IServiceProvider](https://learn.microsoft.com/dotnet/api/system.iserviceprovider) as an argument. It can be used, for example, if we're in a Web API and we need to support scenarios in which every user has a different API Key that can be retrieved accessing a database via Dependency Injection:\n\n```csharp\nbuilder.Services.AddChatGpt((services, options) =\u003e\n{\n    var accountService = services.GetRequiredService\u003cIAccountService\u003e();\n\n    // Dynamically gets the API Key from the service.\n    var apiKey = \"...\";\n\n    options.UseOpenAI(apiKey);\n});\n```\n\n### Configuring ChatGptNet using both IConfiguration and code\n\nIn more complex scenarios, it is possible to configure **ChatGptNet** using both code and [IConfiguration](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.configuration.iconfiguration). This can be useful if we want to set a bunch of common properties, but at the same time we need some configuration logic. For example:\n\n```csharp\nbuilder.Services.AddChatGpt((services, options) =\u003e\n{\n    // Configures common properties (message limit and expiration, default parameters, etc.) using IConfiguration.\n    options.UseConfiguration(builder.Configuration);\n\n    var accountService = services.GetRequiredService\u003cIAccountService\u003e();\n\n    // Dynamically gets the API Key from the service.\n    var apiKey = \"...\";\n\n    options.UseOpenAI(apiKey);\n});\n```\n\n### Configuring HTTP Client\n\n**ChatGptNet** uses an [HttpClient](https://docs.microsoft.com/dotnet/api/system.net.http.httpclient) to call the chat completion and embedding APIs. 
If you need to customize it, you can use the overload of the **AddChatGpt** method that accepts an [Action\u0026lt;IHttpClientBuilder\u0026gt;](https://learn.microsoft.com/dotnet/api/microsoft.extensions.dependencyinjection.ihttpclientbuilder) as an argument. For example, if you want to add resiliency to the HTTP client (let's say a retry policy), you can use [Polly](https://github.com/App-vNext/Polly):\n\n```csharp\n// using Microsoft.Extensions.DependencyInjection;\n// Requires: Microsoft.Extensions.Http.Resilience\n\nbuilder.Services.AddChatGpt(builder.Configuration,\n    httpClient =\u003e\n    {\n        // Configures retry policy on the inner HttpClient using Polly.\n        httpClient.AddStandardResilienceHandler(options =\u003e\n        {\n            options.AttemptTimeout.Timeout = TimeSpan.FromMinutes(1);\n            options.CircuitBreaker.SamplingDuration = TimeSpan.FromMinutes(3);\n            options.TotalRequestTimeout.Timeout = TimeSpan.FromMinutes(3);\n        });\n    });\n```\n\nMore information about this topic is available in the [official documentation](https://learn.microsoft.com/dotnet/core/resilience/http-resilience).\n\n## Usage\n\nThe library can be used in any .NET application built with .NET 6.0 or later. For example, we can create a Minimal API in this way:\n\n```csharp\napp.MapPost(\"/api/chat/ask\", async (Request request, IChatGptClient chatGptClient) =\u003e\n{\n    var response = await chatGptClient.AskAsync(request.ConversationId, request.Message);\n    return TypedResults.Ok(response);\n})\n.WithOpenApi();\n\n// ...\n\npublic record class Request(Guid ConversationId, string Message);\n```\n\nIf we just want to retrieve the response message, we can call the **GetContent** method:\n\n```csharp\nvar content = response.GetContent();\n```\n\n\u003e **Note**\nIf the response has been filtered by the content filtering system, **GetContent** will return *null*. 
So, you should always check the `response.IsContentFiltered` property before trying to access the actual content.\n\n#### Using parameters\n\nUsing configuration, it is possible to set default parameters for chat completion. However, we can also specify parameters for each request, using the **AskAsync** or **AskStreamAsync** overloads that accept a [ChatGptParameters](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptParameters.cs) object:\n\n```csharp\nvar response = await chatGptClient.AskAsync(conversationId, message, new ChatGptParameters\n{\n    MaxTokens = 150,\n    Temperature = 0.7\n});\n```\n\nWe don't need to specify all the parameters, only the ones we want to override. The other ones will be taken from the default configuration.\n\n##### Seed and system fingerprint\n\nChatGPT is known to be non-deterministic. This means that the same input can produce different outputs. To try to control this behavior, we can use the _Temperature_ and _TopP_ parameters. For example, setting the _Temperature_ to values near 0 makes the model more deterministic, while setting it to values near 1 makes the model more creative.\nHowever, this is not always enough to get the same output for the same input. To address this issue, OpenAI introduced the **Seed** parameter. If specified, the model should sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Nevertheless, determinism is not guaranteed even in this case, and you should refer to the _SystemFingerprint_ response parameter to monitor changes in the backend. 
Changes in this value mean that the backend configuration has changed, and this might impact determinism.\n\nAs always, the _Seed_ property can be specified in the default configuration or in the **AskAsync** or **AskStreamAsync** overloads that accept a [ChatGptParameters](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptParameters.cs).\n\n\u003e **Note**\n_Seed_ and _SystemFingerprint_ are only supported by the most recent models, such as _gpt-4-1106-preview_.\n\n##### Response format\n\nIf you want to force the response to be in JSON format, you can use the _ResponseFormat_ parameter:\n\n```csharp\nvar response = await chatGptClient.AskAsync(conversationId, message, new ChatGptParameters\n{\n    ResponseFormat = ChatGptResponseFormat.Json\n});\n```\n\nIn this way, the response will always be valid JSON. Note that you must also instruct the model to produce JSON via a system or user message. If you don't do this, the model will return an error.\n\nAs always, the _ResponseFormat_ property can be specified in the default configuration or in the **AskAsync** or **AskStreamAsync** overloads that accept a [ChatGptParameters](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptParameters.cs).\n\n\u003e **Note**\n_ResponseFormat_ is only supported by the most recent models, such as _gpt-4-1106-preview_.\n\n### Handling a conversation\n\nThe **AskAsync** and **AskStreamAsync** (see below) methods provide overloads that require a *conversationId* parameter. If we pass an empty value, a random one is generated and returned.\nWe can pass this value in subsequent invocations of **AskAsync** or **AskStreamAsync**, so that the library automatically retrieves the previous messages of the current conversation (according to the *MessageLimit* and *MessageExpiration* settings) and sends them to the chat completion API.\n\nThis is the default behavior for all the chat interactions. 
If you want to exclude a particular interaction from the conversation history, you can set the *addToConversationHistory* argument to *false*:\n\n```csharp\nvar response = await chatGptClient.AskAsync(conversationId, message, addToConversationHistory: false);\n```\n\nIn this way, the message will be sent to the chat completion API, but it and the corresponding answer from ChatGPT will not be added to the conversation history.\n\nOn the other hand, in some scenarios, it could be useful to manually add a chat interaction (i.e., a question followed by an answer) to the conversation history. For example, we may want to add a message that was generated by a bot. In this case, we can use the **AddInteractionAsync** method:\n\n```csharp\nawait chatGptClient.AddInteractionAsync(conversationId, question: \"What is the weather like in Taggia?\",\n    answer: \"It's Always Sunny in Taggia\");\n```\n\nThe question will be added as a *user* message and the answer will be added as an *assistant* message in the conversation history. As always, these new messages (respecting the *MessageLimit* option) will be used in subsequent invocations of **AskAsync** or **AskStreamAsync**.\n\n### Response streaming\n\nThe chat completion API supports response streaming. When using this feature, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available. 
**ChatGptNet** provides response streaming using the **AskStreamAsync** method:\n\n```csharp\n// Requests a streaming response.\nvar responseStream = chatGptClient.AskStreamAsync(conversationId, message);\n\nawait foreach (var response in responseStream)\n{\n    Console.Write(response.GetContent());\n    await Task.Delay(80);\n}\n```\n\n![](https://raw.githubusercontent.com/marcominerva/ChatGptNet/master/assets/ChatGptConsoleStreaming.gif)\n\n\u003e **Note**\nIf the response has been filtered by the content filtering system, the **GetContent** method in the _foreach_ will return *null* strings. So, you should always check the `response.IsContentFiltered` property before trying to access the actual content.\n\nResponse streaming works by returning an [IAsyncEnumerable](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.iasyncenumerable-1), so it can be used even in a Web API project:\n\n```csharp\napp.MapGet(\"/api/chat/stream\", (Guid? conversationId, string message, IChatGptClient chatGptClient) =\u003e\n{\n    async IAsyncEnumerable\u003cstring?\u003e Stream()\n    {\n        // Requests a streaming response.\n        var responseStream = chatGptClient.AskStreamAsync(conversationId.GetValueOrDefault(), message);\n\n        // Uses the \"AsDeltas\" extension method to retrieve partial message deltas only.\n        await foreach (var delta in responseStream.AsDeltas())\n        {\n            yield return delta;\n            await Task.Delay(50);\n        }\n    }\n\n    return Stream();\n})\n.WithOpenApi();\n```\n\n![](https://raw.githubusercontent.com/marcominerva/ChatGptNet/master/assets/ChatGptApiStreaming.gif)\n\n\u003e **Note**\nIf the response has been filtered by the content filtering system, the **AsDeltas** method in the _foreach_ will return *null* strings.\n\nThe library is also 100% compatible with Blazor WebAssembly 
applications:\n\n![](https://raw.githubusercontent.com/marcominerva/ChatGptNet/master/assets/ChatGptBlazor.WasmStreaming.gif)\n\nCheck out the [Samples folder](https://github.com/marcominerva/ChatGptNet/tree/master/samples) for more information about the different implementations.\n\n## Changing the assistant's behavior\n\nChatGPT supports messages with the _system_ role to influence how the assistant should behave. For example, we can tell ChatGPT something like this:\n\n- You are a helpful assistant\n- Answer like Shakespeare\n- Give me only wrong answers\n- Answer in rhyme\n\n**ChatGptNet** provides this feature using the **SetupAsync** method:\n\n```csharp\nvar conversationId = await chatGptClient.SetupAsync(\"Answer in rhyme\");\n```\n\nIf we use the same *conversationId* when calling **AskAsync**, then the *system* message will be automatically sent along with every request, so that the assistant will know how to behave.\n\n\u003e **Note**\nThe *system* message does not count toward the message limit.\n\n### Deleting a conversation\n\nConversation history is automatically deleted when the expiration time (specified by the *MessageExpiration* property) is reached. However, if necessary, it is possible to immediately clear the history:\n\n```csharp\nawait chatGptClient.DeleteConversationAsync(conversationId, preserveSetup: false);\n```\n\nThe _preserveSetup_ argument allows you to decide whether to also keep the _system_ message that has been set with the **SetupAsync** method (default: _false_).\n\n## Tool and Function calling\n\nWith function calling, we can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions. This is a new way to more reliably connect GPT's capabilities with external tools and APIs.\n\n**ChatGptNet** fully supports function calling by providing an overload of the **AskAsync** method that allows you to specify function definitions. 
If this parameter is supplied, then the model will decide when it is appropriate to use one of the functions. For example:\n\n```csharp\nvar functions = new List\u003cChatGptFunction\u003e\n{\n    new()\n    {\n        Name = \"GetCurrentWeather\",\n        Description = \"Get the current weather\",\n        Parameters = JsonDocument.Parse(\"\"\"\n        {\n            \"type\": \"object\",\n            \"properties\": {\n                \"location\": {\n                    \"type\": \"string\",\n                    \"description\": \"The city and/or the zip code\"\n                },\n                \"format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"celsius\", \"fahrenheit\"],\n                    \"description\": \"The temperature unit to use. Infer this from the user's location.\"\n                }\n            },\n            \"required\": [\"location\", \"format\"]\n        }\n        \"\"\")\n    },\n    new()\n    {\n        Name = \"GetWeatherForecast\",\n        Description = \"Get an N-day weather forecast\",\n        Parameters = JsonDocument.Parse(\"\"\"\n        {\n            \"type\": \"object\",\n            \"properties\": {\n                \"location\": {\n                    \"type\": \"string\",\n                    \"description\": \"The city and/or the zip code\"\n                },\n                \"format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"celsius\", \"fahrenheit\"],\n                    \"description\": \"The temperature unit to use. 
Infer this from the user's location.\"\n                },\n                \"daysNumber\": {\n                    \"type\": \"integer\",\n                    \"description\": \"The number of days to forecast\"\n                }\n            },\n            \"required\": [\"location\", \"format\", \"daysNumber\"]\n        }\n        \"\"\")\n    }\n};\n\nvar toolParameters = new ChatGptToolParameters\n{\n    FunctionCall = ChatGptToolChoices.Auto,   // This is the default if functions are present.\n    Functions = functions\n};\n\nvar response = await chatGptClient.AskAsync(\"What is the weather like in Taggia?\", toolParameters);\n```\n\nWe can pass an arbitrary number of functions, each one with a name, a description and a JSON schema describing the function parameters, following the [JSON Schema references](https://json-schema.org/understanding-json-schema). Under the hood, functions are injected into the system message in a syntax the model has been trained on. This means functions count against the model's context limit and are billed as input tokens. \n\nThe response object returned by the **AskAsync** method provides a property to check if the model has selected a function call:\n\n```csharp\nif (response.ContainsFunctionCalls())\n{\n    Console.WriteLine(\"I have identified a function to call:\");\n\n    var functionCall = response.GetFunctionCall()!;\n\n    Console.WriteLine(functionCall.Name);\n    Console.WriteLine(functionCall.Arguments);\n}\n```\n\nThis code will print something like this:\n\n```\nI have identified a function to call:\nGetCurrentWeather\n{\n    \"location\": \"Taggia\",\n    \"format\": \"celsius\"\n}\n```\n\nNote that the API will not actually execute any function calls. 
It is up to developers to execute function calls using model outputs.\n\nAfter the actual execution, we need to call the **AddToolResponseAsync** method on the **ChatGptClient** to add the response to the conversation history, just like a standard message, so that it will be automatically used for chat completion:\n\n```csharp\n// Calls the remote function API.\nvar functionResponse = await GetWeatherAsync(functionCall.Arguments);\nawait chatGptClient.AddToolResponseAsync(conversationId, functionCall, functionResponse);\n```\n\nNewer models like _gpt-4-turbo_ support a more general approach to functions: **Tool calling**. When you send a request, you can specify a list of tools the model may call. Currently, only functions are supported, but in future releases other types of tools will be available.\n\nTo use Tool calling instead of direct Function calling, you need to set the _ToolChoice_ and _Tools_ properties in the **ChatGptToolParameters** object (instead of _FunctionCall_ and _Functions_, as in the previous example):\n\n```csharp\nvar toolParameters = new ChatGptToolParameters\n{\n    ToolChoice = ChatGptToolChoices.Auto,   // This is the default if functions are present.\n    Tools = functions.ToTools()\n};\n```\n\nThe **ToTools** extension method is used to convert a list of [ChatGptFunction](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptFunction.cs) to a list of tools.\n\nIf you use this new approach, of course you still need to check if the model has selected a tool call, using the same approach shown before.\nThen, after the actual execution of the function, you have to call the **AddToolResponseAsync** method, but in this case you need to specify the tool (not the function) to which the response refers:\n\n```csharp\nvar tool = response.GetToolCalls()!.First();\nvar functionCall = response.GetFunctionCall()!;\n\n// Calls the remote function API.\nvar functionResponse = await 
GetWeatherAsync(functionCall.Arguments);\n\nawait chatGptClient.AddToolResponseAsync(conversationId, tool, functionResponse);\n```\n\nFinally, you need to resend the original message to the chat completion API, so that the model can continue the conversation taking into account the function call response. Check out the [Function calling sample](https://github.com/marcominerva/ChatGptNet/blob/master/samples/ChatGptFunctionCallingConsole/Application.cs#L18) for a complete implementation of this workflow.\n\n## Content filtering\n\nWhen using Azure OpenAI Service, we automatically get content filtering for free. For details about how it works, check out the [documentation](https://learn.microsoft.com/azure/ai-services/openai/concepts/content-filter). This information is returned for all scenarios when using API version `2023-06-01-preview` or later. **ChatGptNet** fully supports this object model by providing the corresponding properties in the [ChatGptResponse](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptResponse.cs#L57) and [ChatGptChoice](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptChoice.cs#L26) classes.\n\n## Embeddings\n\n[Embeddings](https://platform.openai.com/docs/guides/embeddings) allow you to transform text into a vector space. This can be useful to compare the similarity of two sentences, for example. **ChatGptNet** fully supports this feature by providing the **GenerateEmbeddingAsync** method:\n\n```csharp\nvar response = await chatGptClient.GenerateEmbeddingAsync(message);\nvar embeddings = response.GetEmbedding();\n```\n\nThis code will give you a float array containing all the embeddings for the specified message. 
The length of the array depends on the model used:\n\n| Model | Output dimension |\n| - | - |\n| text-embedding-ada-002 | 1536 |\n| text-embedding-3-small | 1536 |\n| text-embedding-3-large | 3072 |\n\nNewer models like _text-embedding-3-small_ and _text-embedding-3-large_ allow developers to trade off performance and cost of using embeddings. Specifically, developers can shorten embeddings without the embedding losing its concept-representing properties.\n\nAs for ChatGPT, these settings can be applied in various ways:\n\n- Via code:\n\n```csharp\nbuilder.Services.AddChatGpt(options =\u003e\n{\n    // ...\n\n    options.DefaultEmbeddingParameters = new EmbeddingParameters\n    {\n        Dimensions = 256\n    };\n});\n```\n\n- Using the _appsettings.json_ file:\n\n```\n\"ChatGPT\": {\n    \"DefaultEmbeddingParameters\": {\n        \"Dimensions\": 256\n    }\n}\n```\n\nThen, if you want to change the dimension for a particular request, you can specify the *EmbeddingParameters* argument in the **GenerateEmbeddingAsync** invocation:\n\n```csharp\nvar response = await chatGptClient.GenerateEmbeddingAsync(request.Message, new EmbeddingParameters\n{\n    Dimensions = 512\n});\n\nvar embeddings = response.GetEmbedding();   // The length of the array is 512\n```\n\nIf you need to calculate the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) between two embeddings, you can use the **EmbeddingUtility.CosineSimilarity** method.\n\n## Documentation\n\nThe full technical documentation is available [here](https://github.com/marcominerva/ChatGptNet/tree/master/docs).\n\n## Contribute\n\nThe project is constantly evolving. Contributions are welcome. Feel free to file issues and pull requests on the repo and we'll address them as we can.\n\n\u003e **Warning**\nRemember to work on the **develop** branch, don't use the **master** branch directly. 
Create Pull Requests targeting **develop**.\n","funding_links":[],"categories":["hacktoberfest","Open API","chatgpt"],"sub_categories":["提示语（魔法）"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmarcominerva%2FChatGptNet","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmarcominerva%2FChatGptNet","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmarcominerva%2FChatGptNet/lists"}