{"id":26359130,"url":"https://github.com/assemblyai/assemblyai-node-sdk","last_synced_at":"2025-04-06T18:13:30.131Z","repository":{"id":32925380,"uuid":"146362879","full_name":"AssemblyAI/assemblyai-node-sdk","owner":"AssemblyAI","description":"The AssemblyAI JavaScript SDK provides an easy-to-use interface for interacting with the AssemblyAI API, which supports async and real-time transcription, audio intelligence models, as well as the latest LeMUR models.","archived":false,"fork":false,"pushed_at":"2024-10-17T22:44:06.000Z","size":10755,"stargazers_count":34,"open_issues_count":2,"forks_count":11,"subscribers_count":7,"default_branch":"main","last_synced_at":"2024-10-20T09:49:05.254Z","etag":null,"topics":["ai","asr","assemblyai","llm","nodejs","speech-to-text","transcription"],"latest_commit_sha":null,"homepage":"https://www.assemblyai.com","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/AssemblyAI.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-08-27T22:49:41.000Z","updated_at":"2024-10-17T22:44:09.000Z","dependencies_parsed_at":"2023-10-14T22:18:34.185Z","dependency_job_id":"a9eecc94-7461-4cab-bce7-221e6505626c","html_url":"https://github.com/AssemblyAI/assemblyai-node-sdk","commit_stats":{"total_commits":14,"total_committers":2,"mean_commits":7.0,"dds":0.5,"last_synced_commit":"0407b20263e17558f0815e9b5ec17167f604452d"},"previous_names":[],"tags_count":34,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/
AssemblyAI%2Fassemblyai-node-sdk","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AssemblyAI%2Fassemblyai-node-sdk/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AssemblyAI%2Fassemblyai-node-sdk/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AssemblyAI%2Fassemblyai-node-sdk/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/AssemblyAI","download_url":"https://codeload.github.com/AssemblyAI/assemblyai-node-sdk/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247526753,"owners_count":20953143,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","asr","assemblyai","llm","nodejs","speech-to-text","transcription"],"created_at":"2025-03-16T15:58:48.298Z","updated_at":"2025-04-06T18:13:30.093Z","avatar_url":"https://github.com/AssemblyAI.png","language":"TypeScript","readme":"\u003cimg src=\"https://github.com/AssemblyAI/assemblyai-node-sdk/blob/main/assemblyai.png?raw=true\" width=\"500\"/\u003e\n\n---\n\n[![npm](https://img.shields.io/npm/v/assemblyai)](https://www.npmjs.com/package/assemblyai)\n[![Test](https://github.com/AssemblyAI/assemblyai-node-sdk/actions/workflows/test.yml/badge.svg)](https://github.com/AssemblyAI/assemblyai-node-sdk/actions/workflows/test.yml)\n[![GitHub License](https://img.shields.io/github/license/AssemblyAI/assemblyai-node-sdk)](https://github.com/AssemblyAI/assemblyai-node-sdk/blob/main/LICENSE)\n[![AssemblyAI 
Twitter](https://img.shields.io/twitter/follow/AssemblyAI?label=%40AssemblyAI\u0026style=social)](https://twitter.com/AssemblyAI)\n[![AssemblyAI YouTube](https://img.shields.io/youtube/channel/subscribers/UCtatfZMf-8EkIwASXM4ts0A)](https://www.youtube.com/@AssemblyAI)\n[![Discord](https://img.shields.io/discord/875120158014853141?logo=discord\u0026label=Discord\u0026link=https%3A%2F%2Fdiscord.com%2Fchannels%2F875120158014853141\u0026style=social)\n](https://assembly.ai/discord)\n\n# AssemblyAI JavaScript SDK\n\nThe AssemblyAI JavaScript SDK provides an easy-to-use interface for interacting with the AssemblyAI API,\nwhich supports async and real-time transcription, as well as the latest LeMUR models.\nIt is written primarily for Node.js in TypeScript with all types exported, but also [compatible with other runtimes](./docs/compat.md).\n\n## Documentation\n\nVisit the [AssemblyAI documentation](https://www.assemblyai.com/docs) for step-by-step instructions and a lot more details about our AI models and API.\nExplore the [SDK API reference](https://assemblyai.github.io/assemblyai-node-sdk/) for more details on the SDK types, functions, and classes.\n\n## Quickstart\n\nInstall the AssemblyAI SDK using your preferred package manager:\n\n```bash\nnpm install assemblyai\n```\n\n```bash\nyarn add assemblyai\n```\n\n```bash\npnpm add assemblyai\n```\n\n```bash\nbun add assemblyai\n```\n\nThen, import the `assemblyai` module and create an AssemblyAI object with your API key:\n\n```js\nimport { AssemblyAI } from \"assemblyai\";\n\nconst client = new AssemblyAI({\n  apiKey: process.env.ASSEMBLYAI_API_KEY,\n});\n```\n\nYou can now use the `client` object to interact with the AssemblyAI API.\n\n### Using a CDN\n\nYou can use automatic CDNs like [UNPKG](https://unpkg.com/) to load the library from a script tag.\n\n- Replace `:version` with the desired version or `latest`.\n- Remove `.min` to load the non-minified version.\n- Remove `.streaming` to load the entire SDK. 
Keep `.streaming` to load the Streaming STT specific version.\n\n```html\n\u003c!-- Unminified full SDK --\u003e\n\u003cscript src=\"https://www.unpkg.com/assemblyai@:version/dist/assemblyai.umd.js\"\u003e\u003c/script\u003e\n\u003c!-- Minified full SDK --\u003e\n\u003cscript src=\"https://www.unpkg.com/assemblyai@:version/dist/assemblyai.umd.min.js\"\u003e\u003c/script\u003e\n\u003c!-- Unminified Streaming STT only --\u003e\n\u003cscript src=\"https://www.unpkg.com/assemblyai@:version/dist/assemblyai.streaming.umd.js\"\u003e\u003c/script\u003e\n\u003c!-- Minified Streaming STT only --\u003e\n\u003cscript src=\"https://www.unpkg.com/assemblyai@:version/dist/assemblyai.streaming.umd.min.js\"\u003e\u003c/script\u003e\n```\n\nThe script creates a global `assemblyai` variable containing all the services.\nHere's how you create a `RealtimeTranscriber` object.\n\n```js\nconst { RealtimeTranscriber } = assemblyai;\nconst transcriber = new RealtimeTranscriber({\n  token: \"[GENERATE TEMPORARY AUTH TOKEN IN YOUR API]\",\n  ...\n});\n```\n\nFor type support in your IDE, see [Reference types from JavaScript](./docs/reference-types-from-js.md).\n\n## Speech-To-Text\n\n### Transcribe audio and video files\n\n\u003cdetails open\u003e\n  \u003csummary\u003eTranscribe an audio file with a public URL\u003c/summary\u003e\n\nWhen you create a transcript, you can either pass in a URL to an audio file or upload a file directly.\n\n```js\n// Transcribe file at remote URL\nlet transcript = await client.transcripts.transcribe({\n  audio: \"https://assembly.ai/espn.m4a\",\n});\n```\n\n\u003e **Note**\n\u003e You can also pass a local file path, a stream, or a buffer as the `audio` property.\n\n`transcribe` queues a transcription job and polls it until the `status` is `completed` or `error`.\n\nIf you don't want to wait until the transcript is ready, you can use `submit`:\n\n```js\nlet transcript = await client.transcripts.submit({\n  audio: 
\"https://assembly.ai/espn.m4a\",\n});\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eTranscribe a local audio file\u003c/summary\u003e\n\nWhen you create a transcript, you can either pass in a URL to an audio file or upload a file directly.\n\n```js\n// Upload a file via local path and transcribe\nlet transcript = await client.transcripts.transcribe({\n  audio: \"./news.mp4\",\n});\n```\n\n\u003e **Note:**\n\u003e You can also pass a file URL, a stream, or a buffer as the `audio` property.\n\n`transcribe` queues a transcription job and polls it until the `status` is `completed` or `error`.\n\nIf you don't want to wait until the transcript is ready, you can use `submit`:\n\n```js\nlet transcript = await client.transcripts.submit({\n  audio: \"./news.mp4\",\n});\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eEnable additional AI models\u003c/summary\u003e\n\nYou can extract even more insights from the audio by enabling any of our [AI models](https://www.assemblyai.com/docs/audio-intelligence) using _transcription options_.\nFor example, here's how to enable the [Speaker diarization](https://www.assemblyai.com/docs/speech-to-text/speaker-diarization) model to detect who said what.\n\n```js\nlet transcript = await client.transcripts.transcribe({\n  audio: \"https://assembly.ai/espn.m4a\",\n  speaker_labels: true,\n});\nfor (let utterance of transcript.utterances) {\n  console.log(`Speaker ${utterance.speaker}: ${utterance.text}`);\n}\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eGet a transcript\u003c/summary\u003e\n\nThis will return the transcript object in its current state. If the transcript is still processing, the `status` field will be `queued` or `processing`. 
Once the transcript is complete, the `status` field will be `completed`.\n\n```js\nconst transcript = await client.transcripts.get(transcript.id);\n```\n\nIf you created a transcript using `.submit()`, you can still poll until the transcript `status` is `completed` or `error` using `.waitUntilReady()`:\n\n```js\nconst transcript = await client.transcripts.waitUntilReady(transcript.id, {\n  // How frequently the transcript is polled in ms. Defaults to 3000.\n  pollingInterval: 1000,\n  // How long to wait in ms until the \"Polling timeout\" error is thrown. Defaults to infinite (-1).\n  pollingTimeout: 5000,\n});\n```\n\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eGet sentences and paragraphs\u003c/summary\u003e\n\n```js\nconst sentences = await client.transcripts.sentences(transcript.id);\nconst paragraphs = await client.transcripts.paragraphs(transcript.id);\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eGet subtitles\u003c/summary\u003e\n\n```js\nconst charsPerCaption = 32;\nlet srt = await client.transcripts.subtitles(transcript.id, \"srt\");\nsrt = await client.transcripts.subtitles(transcript.id, \"srt\", charsPerCaption);\n\nlet vtt = await client.transcripts.subtitles(transcript.id, \"vtt\");\nvtt = await client.transcripts.subtitles(transcript.id, \"vtt\", charsPerCaption);\n```\n\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eList transcripts\u003c/summary\u003e\n\nThis will return a page of transcripts you created.\n\n```js\nconst page = await client.transcripts.list();\n```\n\nYou can also paginate over all pages.\n\n```typescript\nlet previousPageUrl: string | null = null;\ndo {\n  const page = await client.transcripts.list(previousPageUrl);\n  previousPageUrl = page.page_details.prev_url;\n} while (previousPageUrl !== null);\n```\n\n\u003e [!NOTE]\n\u003e To paginate over all pages, you need to use the `page.page_details.prev_url`\n\u003e because the transcripts are returned in descending 
order by creation date and time.\n\u003e The first page contains the most recent transcripts, and each \"previous\" page contains older transcripts.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eDelete a transcript\u003c/summary\u003e\n\n```js\nconst res = await client.transcripts.delete(transcript.id);\n```\n\n\u003c/details\u003e\n\n### Transcribe in real-time\n\nCreate the real-time transcriber.\n\n```typescript\nconst rt = client.realtime.transcriber();\n```\n\nYou can also pass in the following options.\n\n```typescript\nconst rt = client.realtime.transcriber({\n  realtimeUrl: 'wss://localhost/override',\n  apiKey: process.env.ASSEMBLYAI_API_KEY, // The API key passed to `AssemblyAI` is used by default\n  sampleRate: 16_000,\n  wordBoost: ['foo', 'bar']\n});\n```\n\n\u003e [!WARNING]\n\u003e Storing your API key in client-facing applications exposes your API key.\n\u003e Generate a temporary auth token on the server and pass it to your client.\n\u003e _Server code_:\n\u003e\n\u003e ```typescript\n\u003e const token = await client.realtime.createTemporaryToken({ expires_in: 60 });\n\u003e // TODO: return token to client\n\u003e ```\n\u003e\n\u003e _Client code_:\n\u003e\n\u003e ```typescript\n\u003e import { RealtimeTranscriber } from \"assemblyai\"; // or \"assemblyai/streaming\"\n\u003e // TODO: implement getToken to retrieve token from server\n\u003e const token = await getToken();\n\u003e const rt = new RealtimeTranscriber({\n\u003e   token,\n\u003e });\n\u003e ```\n\nYou can configure the following events.\n\n\u003c!-- prettier-ignore --\u003e\n```typescript\nrt.on(\"open\", ({ sessionId, expiresAt }) =\u003e console.log('Session ID:', sessionId, 'Expires at:', expiresAt));\nrt.on(\"close\", (code: number, reason: string) =\u003e console.log('Closed', code, reason));\nrt.on(\"transcript\", (transcript: TranscriptMessage) =\u003e console.log('Transcript:', transcript));\nrt.on(\"transcript.partial\", (transcript: PartialTranscriptMessage) 
=\u003e console.log('Partial transcript:', transcript));\nrt.on(\"transcript.final\", (transcript: FinalTranscriptMessage) =\u003e console.log('Final transcript:', transcript));\nrt.on(\"error\", (error: Error) =\u003e console.error('Error', error));\n```\n\nAfter configuring your events, connect to the server.\n\n```typescript\nawait rt.connect();\n```\n\nSend audio data via chunks.\n\n```typescript\n// Pseudo code for getting audio\ngetAudio((chunk) =\u003e {\n  rt.sendAudio(chunk);\n});\n```\n\nOr send audio data via a stream by piping to the real-time stream.\n\n```typescript\naudioStream.pipeTo(rt.stream());\n```\n\nClose the connection when you're finished.\n\n```typescript\nawait rt.close();\n```\n\n## Apply LLMs to your audio with LeMUR\n\nCall [LeMUR endpoints](https://www.assemblyai.com/docs/api-reference/lemur) to apply LLMs to your transcript.\n\n\u003cdetails open\u003e\n\u003csummary\u003ePrompt your audio with LeMUR\u003c/summary\u003e\n\n```js\nconst { response } = await client.lemur.task({\n  transcript_ids: [\"0d295578-8c75-421a-885a-2c487f188927\"],\n  prompt: \"Write a haiku about this conversation.\",\n});\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eSummarize with LeMUR\u003c/summary\u003e\n\n```js\nconst { response } = await client.lemur.summary({\n  transcript_ids: [\"0d295578-8c75-421a-885a-2c487f188927\"],\n  answer_format: \"one sentence\",\n  context: {\n    speakers: [\"Alex\", \"Bob\"],\n  },\n});\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eAsk questions\u003c/summary\u003e\n\n```js\nconst { response } = await client.lemur.questionAnswer({\n  transcript_ids: [\"0d295578-8c75-421a-885a-2c487f188927\"],\n  questions: [\n    {\n      question: \"What are they discussing?\",\n      answer_format: \"text\",\n    },\n  ],\n});\n```\n\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eGenerate action items\u003c/summary\u003e\n\n```js\nconst { response } = await 
client.lemur.actionItems({\n  transcript_ids: [\"0d295578-8c75-421a-885a-2c487f188927\"],\n});\n```\n\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eDelete LeMUR request\u003c/summary\u003e\n\n```js\nconst response = await client.lemur.purgeRequestData(lemurResponse.request_id);\n```\n\n\u003c/details\u003e\n\n## Contributing\n\nIf you want to contribute to the JavaScript SDK, follow the guidelines in [CONTRIBUTING.md](./CONTRIBUTING.md).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fassemblyai%2Fassemblyai-node-sdk","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fassemblyai%2Fassemblyai-node-sdk","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fassemblyai%2Fassemblyai-node-sdk/lists"}