https://github.com/assemblyai/assemblyai-csharp-sdk
The AssemblyAI C# .NET SDK provides an easy-to-use interface for interacting with the AssemblyAI API, which supports async and real-time transcription, audio intelligence models, as well as the latest LeMUR models.
- Host: GitHub
- URL: https://github.com/assemblyai/assemblyai-csharp-sdk
- Owner: AssemblyAI
- License: MIT
- Created: 2023-10-20T18:36:00.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2025-02-05T09:49:30.000Z (8 months ago)
- Last Synced: 2025-03-07T14:38:32.889Z (7 months ago)
- Topics: ai, asr, assemblyai, csharp, dotnet, llm, speech-to-text
- Language: C#
- Homepage: https://www.assemblyai.com
- Size: 6.95 MB
- Stars: 8
- Watchers: 4
- Forks: 4
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE
README
---
# AssemblyAI C# .NET SDK
[NuGet](https://www.nuget.org/packages/AssemblyAI) | [License](https://github.com/AssemblyAI/assemblyai-csharp-sdk/blob/main/LICENSE) | [Twitter](https://twitter.com/AssemblyAI) | [YouTube](https://www.youtube.com/@AssemblyAI) | [Discord](https://assembly.ai/discord)

The AssemblyAI C# SDK provides an easy-to-use interface for interacting with the AssemblyAI API from .NET, which supports async and real-time transcription, as well as the latest audio intelligence and LeMUR models.
The C# SDK is compatible with `.NET 6.0` and up, `.NET Framework 4.6.2` and up, and `.NET Standard 2.0`.

## Documentation
Visit the [AssemblyAI documentation](https://www.assemblyai.com/docs) for step-by-step instructions and a lot more details about our AI models and API.
Explore [the SDK API reference](https://assemblyai.github.io/assemblyai-csharp-sdk/) for more details on the SDK types, functions, and classes.

## Quickstart
You can find the `AssemblyAI` C# SDK on [NuGet](https://www.nuget.org/packages/AssemblyAI).
Add the latest version using the .NET CLI:

```bash
dotnet add package AssemblyAI
```

Then, create an AssemblyAIClient with your API key:

```csharp
using AssemblyAI;

var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);
```

You can now use the `client` object to interact with the AssemblyAI API.
## Add the AssemblyAIClient to the dependency injection container
The AssemblyAI SDK has built-in support for the default .NET dependency injection container.
Add the `AssemblyAIClient` to the service collection like this:

```csharp
using AssemblyAI;

// build your services
services.AddAssemblyAIClient();
```
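For example, in a minimal ASP.NET Core app, registering and consuming the client might look like this (a sketch; the endpoint is illustrative):

```csharp
using AssemblyAI;

var builder = WebApplication.CreateBuilder(args);

// Registers the AssemblyAIClient using the "AssemblyAI" configuration section shown below
builder.Services.AddAssemblyAIClient();

var app = builder.Build();

// Resolve the client wherever you need it, e.g. in a minimal API endpoint
app.MapGet("/transcripts", async (AssemblyAIClient client) =>
    await client.Transcripts.ListAsync());

app.Run();
```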
By default, the `AssemblyAIClient` loads its configuration from the `AssemblyAI` section of the .NET configuration.
```json
{
  "AssemblyAI": {
    "ApiKey": "YOUR_ASSEMBLYAI_API_KEY"
  }
}
```

You can also configure the `AssemblyAIClient` in other ways using the `AddAssemblyAIClient` overloads.
```csharp
using AssemblyAI;

// build your services
services.AddAssemblyAIClient(options =>
{
    options.ApiKey = Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!;
});
```

## Speech-To-Text
### Transcribe audio and video files
Transcribe an audio file with a public URL
When you create a transcript, you can either pass in a URL to an audio file or upload a file directly.
```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);

// Transcribe file at remote URL
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://assembly.ai/espn.m4a",
    LanguageCode = TranscriptLanguageCode.EnUs
});

// checks if transcript.Status == TranscriptStatus.Completed, throws an exception if not
transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

`TranscribeAsync` queues a transcription job and polls it until the `transcript.Status` is `completed` or `error`.
If you don't want to wait until the transcript is ready, you can use `SubmitAsync`:
```csharp
transcript = await client.Transcripts.SubmitAsync(new TranscriptParams
{
    AudioUrl = "https://assembly.ai/espn.m4a",
    LanguageCode = TranscriptLanguageCode.EnUs
});
```

Transcribe a local audio file
When you create a transcript, you can either pass in a URL to an audio file or upload a file directly.
```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);

// Transcribe file using file info
var transcript = await client.Transcripts.TranscribeAsync(
    new FileInfo("./news.mp4"),
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

// Transcribe file from stream
await using var stream = new FileStream("./news.mp4", FileMode.Open);
transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);
```

`TranscribeAsync` queues a transcription job and polls it until the `Status` is `completed` or `error`.
If you don't want to wait until the transcript is ready, you can use `SubmitAsync`:
```csharp
transcript = await client.Transcripts.SubmitAsync(
    new FileInfo("./news.mp4"),
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);
```

Enable additional AI models
You can extract even more insights from the audio by enabling any of our [AI models](https://www.assemblyai.com/docs/audio-intelligence) using _transcription options_.
For example, here's how to enable the [Speaker Diarization](https://www.assemblyai.com/docs/speech-to-text/speaker-diarization) model to detect who said what.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://assembly.ai/espn.m4a",
    SpeakerLabels = true
});

// checks if transcript.Status == TranscriptStatus.Completed, throws an exception if not
transcript.EnsureStatusCompleted();

foreach (var utterance in transcript.Utterances)
{
    Console.WriteLine($"Speaker {utterance.Speaker}: {utterance.Text}");
}
```

Get a transcript
This will return the transcript object in its current state. If the transcript is still processing, the `Status` field will be `Queued` or `Processing`. Once the transcript is complete, the `Status` field will be `Completed`.
```csharp
transcript = await client.Transcripts.GetAsync(transcript.Id);
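
// A sketch of branching on the status; the Error property is assumed
// to mirror the API's error field.
if (transcript.Status == TranscriptStatus.Completed)
{
    Console.WriteLine(transcript.Text);
}
else if (transcript.Status == TranscriptStatus.Error)
{
    Console.WriteLine($"Transcription failed: {transcript.Error}");
}
else
{
    Console.WriteLine("Transcript is still queued or processing.");
}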
```

If you created a transcript using `.SubmitAsync(...)`, you can still poll until the transcript `Status` is `Completed` or `Error` using `.WaitUntilReady(...)`:
```csharp
transcript = await client.Transcripts.WaitUntilReady(
    transcript.Id,
    pollingInterval: TimeSpan.FromSeconds(1),
    pollingTimeout: TimeSpan.FromMinutes(10)
);
```

Get sentences and paragraphs
```csharp
var sentences = await client.Transcripts.GetSentencesAsync(transcript.Id);
var paragraphs = await client.Transcripts.GetParagraphsAsync(transcript.Id);
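
// Print each sentence (a sketch; assumes the response exposes a Sentences
// collection mirroring the API's sentences field)
foreach (var sentence in sentences.Sentences)
{
    Console.WriteLine(sentence.Text);
}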
```

Get subtitles
```csharp
const int charsPerCaption = 32;
var srt = await client.Transcripts.GetSubtitlesAsync(transcript.Id, SubtitleFormat.Srt);
srt = await client.Transcripts.GetSubtitlesAsync(transcript.Id, SubtitleFormat.Srt, charsPerCaption: charsPerCaption);

var vtt = await client.Transcripts.GetSubtitlesAsync(transcript.Id, SubtitleFormat.Vtt);
vtt = await client.Transcripts.GetSubtitlesAsync(transcript.Id, SubtitleFormat.Vtt, charsPerCaption: charsPerCaption);
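
// Save the subtitles to disk using standard .NET file I/O
File.WriteAllText("subtitles.srt", srt);
File.WriteAllText("subtitles.vtt", vtt);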
```

List transcripts
This will return a page of transcripts you created.
```csharp
var page = await client.Transcripts.ListAsync();
```

You can also paginate over all pages.
```csharp
var page = await client.Transcripts.ListAsync();
while (page.PageDetails.PrevUrl != null)
{
    page = await client.Transcripts.ListAsync(page.PageDetails.PrevUrl);
}
```

> [!NOTE]
> To paginate over all pages, you need to use the `page.PageDetails.PrevUrl`
> because the transcripts are returned in descending order by creation date and time.
> The first page contains the most recent transcripts, and each "previous" page contains older transcripts.

Delete a transcript
```csharp
transcript = await client.Transcripts.DeleteAsync(transcript.Id);
```

### Transcribe in real-time
Create the real-time transcriber.
```csharp
using AssemblyAI;
using AssemblyAI.Realtime;

var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);

await using var transcriber = client.Realtime.CreateTranscriber();
```

You can also pass in the following options.
```csharp
using AssemblyAI;
using AssemblyAI.Realtime;

await using var transcriber = client.Realtime.CreateTranscriber(new RealtimeTranscriberOptions
{
    // If ApiKey is null, the API key passed to `AssemblyAIClient` will be used
    ApiKey = Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY"),
    RealtimeUrl = "wss://localhost/override",
    SampleRate = 16_000,
    WordBoost = new[] { "foo", "bar" }
});
```

> [!WARNING]
> Storing your API key in client-facing applications exposes your API key.
> Generate a temporary auth token on the server and pass it to your client.
> _Server code_:
>
> ```csharp
> var token = await client.Realtime.CreateTemporaryTokenAsync(expiresIn: 60);
> // TODO: return token to client
> ```
>
> _Client code_:
>
> ```csharp
> using AssemblyAI;
> using AssemblyAI.Realtime;
>
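> // GetToken is your own helper that fetches the temporary token from your server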
> var token = await GetToken();
> await using var transcriber = new RealtimeTranscriber
> {
>     Token = token.Token
> };
> ```

You can configure the following events.
```csharp
transcriber.SessionBegins.Subscribe(
    message => Console.WriteLine(
        $"""
        Session begins:
        - Session ID: {message.SessionId}
        - Expires at: {message.ExpiresAt}
        """)
);

transcriber.PartialTranscriptReceived.Subscribe(
    transcript => Console.WriteLine("Partial transcript: {0}", transcript.Text)
);

transcriber.FinalTranscriptReceived.Subscribe(
    transcript => Console.WriteLine("Final transcript: {0}", transcript.Text)
);

transcriber.TranscriptReceived.Subscribe(
    transcript => Console.WriteLine("Transcript: {0}", transcript.Match(
        partialTranscript => partialTranscript.Text,
        finalTranscript => finalTranscript.Text
    ))
);

transcriber.ErrorReceived.Subscribe(
    error => Console.WriteLine("Error: {0}", error)
);

transcriber.Closed.Subscribe(
    closeEvt => Console.WriteLine("Closed: {0} - {1}", closeEvt.Code, closeEvt.Reason)
);
```

After configuring your events, connect to the server.
```csharp
await transcriber.ConnectAsync();
```

Send audio data in chunks.
```csharp
// Pseudo code for getting audio
GetAudio(audioChunk => {
    transcriber.SendAudio(audioChunk);
});
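
// For example, a minimal sketch that streams raw PCM audio from a file in 8 KB chunks
// (file name and chunk size are illustrative; the audio must match the configured sample rate):
await using var audioFile = new FileStream("./audio.raw", FileMode.Open, FileAccess.Read);
var buffer = new byte[8192];
int bytesRead;
while ((bytesRead = await audioFile.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
    transcriber.SendAudio(buffer[..bytesRead]);
}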
```

Close the connection when you're finished.
```csharp
await transcriber.CloseAsync();
```

## Apply LLMs to your audio with LeMUR
Call [LeMUR endpoints](https://www.assemblyai.com/docs/api-reference/lemur) to apply LLMs to your transcript.
Prompt your audio with LeMUR
```csharp
var response = await client.Lemur.TaskAsync(new LemurTaskParams
{
    TranscriptIds = ["0d295578-8c75-421a-885a-2c487f188927"],
    Prompt = "Write a haiku about this conversation.",
});
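
// The generated text is on the Response property
// (assumed to mirror the API's response field)
Console.WriteLine(response.Response);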
```

Summarize with LeMUR
```csharp
var response = await client.Lemur.SummaryAsync(new LemurSummaryParams
{
    TranscriptIds = ["0d295578-8c75-421a-885a-2c487f188927"],
    AnswerFormat = "one sentence",
    Context = new Dictionary<string, object>
    {
        ["Speaker"] = new[] { "Alex", "Bob" }
    }
});
```

Ask questions
```csharp
var response = await client.Lemur.QuestionAnswerAsync(new LemurQuestionAnswerParams
{
    TranscriptIds = ["0d295578-8c75-421a-885a-2c487f188927"],
    Questions = [
        new LemurQuestion
        {
            Question = "What are they discussing?",
            AnswerFormat = "text"
        }
    ]
});
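
// Print each answer (a sketch; assumes the response exposes the question-answer
// pairs as a collection mirroring the API's response field)
foreach (var qa in response.Response)
{
    Console.WriteLine($"{qa.Question}: {qa.Answer}");
}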
```

Generate action items
```csharp
var response = await client.Lemur.ActionItemsAsync(new LemurActionItemsParams
{
    TranscriptIds = ["0d295578-8c75-421a-885a-2c487f188927"]
});
```

Delete LeMUR request
```csharp
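// lemurResponse is the response from a previous LeMUR request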
var response = await client.Lemur.PurgeRequestDataAsync(lemurResponse.RequestId);
```