{"id":26359123,"url":"https://github.com/assemblyai/assemblyai-python-sdk","last_synced_at":"2026-03-12T23:01:06.551Z","repository":{"id":166609632,"uuid":"642082678","full_name":"AssemblyAI/assemblyai-python-sdk","owner":"AssemblyAI","description":"AssemblyAI's Official Python SDK","archived":false,"fork":false,"pushed_at":"2026-03-03T17:14:20.000Z","size":392,"stargazers_count":203,"open_issues_count":16,"forks_count":30,"subscribers_count":5,"default_branch":"master","last_synced_at":"2026-03-03T17:50:36.499Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://assemblyai.com","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/AssemblyAI.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-05-17T19:30:06.000Z","updated_at":"2026-03-03T17:13:36.000Z","dependencies_parsed_at":"2026-03-12T23:00:38.447Z","dependency_job_id":null,"html_url":"https://github.com/AssemblyAI/assemblyai-python-sdk","commit_stats":{"total_commits":134,"total_committers":6,"mean_commits":"22.333333333333332","dds":"0.26865671641791045","last_synced_commit":"27f92b84e239560bdb530f1dbc3ee51c911e5366"},"previous_names":["assemblyai/assemblyai-python-sdk"],"tags_count":91,"template":false,"template_full_name":null,"purl":"pkg:github/AssemblyAI/assemblyai-python-sdk","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AssemblyAI%2Fassemblyai-python-sdk","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/Gi
tHub/repositories/AssemblyAI%2Fassemblyai-python-sdk/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AssemblyAI%2Fassemblyai-python-sdk/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AssemblyAI%2Fassemblyai-python-sdk/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/AssemblyAI","download_url":"https://codeload.github.com/AssemblyAI/assemblyai-python-sdk/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AssemblyAI%2Fassemblyai-python-sdk/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30448565,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-12T21:31:01.033Z","status":"ssl_error","status_checked_at":"2026-03-12T21:30:43.161Z","response_time":114,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-03-16T15:58:47.307Z","updated_at":"2026-03-12T23:01:06.544Z","avatar_url":"https://github.com/AssemblyAI.png","language":"Python","readme":"\u003cimg src=\"https://github.com/AssemblyAI/assemblyai-python-sdk/blob/master/assemblyai.png?raw=true\" width=\"500\"/\u003e\n\n---\n\n[![CI 
Passing](https://github.com/AssemblyAI/assemblyai-python-sdk/actions/workflows/test.yml/badge.svg)](https://github.com/AssemblyAI/assemblyai-python-sdk/actions/workflows/test.yml)\n[![GitHub License](https://img.shields.io/github/license/AssemblyAI/assemblyai-python-sdk)](https://github.com/AssemblyAI/assemblyai-python-sdk/blob/master/LICENSE)\n[![PyPI version](https://badge.fury.io/py/assemblyai.svg)](https://badge.fury.io/py/assemblyai)\n[![PyPI Python Versions](https://img.shields.io/pypi/pyversions/assemblyai)](https://pypi.python.org/pypi/assemblyai/)\n![PyPI - Wheel](https://img.shields.io/pypi/wheel/assemblyai)\n[![AssemblyAI Twitter](https://img.shields.io/twitter/follow/AssemblyAI?label=%40AssemblyAI\u0026style=social)](https://twitter.com/AssemblyAI)\n[![AssemblyAI YouTube](https://img.shields.io/youtube/channel/subscribers/UCtatfZMf-8EkIwASXM4ts0A)](https://www.youtube.com/@AssemblyAI)\n[![Discord](https://img.shields.io/discord/875120158014853141?logo=discord\u0026label=Discord\u0026link=https%3A%2F%2Fdiscord.com%2Fchannels%2F875120158014853141\u0026style=social)\n](https://assemblyai.com/discord)\n\n# AssemblyAI's Python SDK\n\n\u003e _Build with AI models that can transcribe and understand audio_\n\nWith a single API call, get access to AI models built on the latest AI breakthroughs to transcribe and understand audio and speech data securely at large scale.\n\n# Overview\n\n- [AssemblyAI's Python SDK](#assemblyais-python-sdk)\n- [Overview](#overview)\n- [Documentation](#documentation)\n- [Quick Start](#quick-start)\n  - [Installation](#installation)\n  - [Examples](#examples)\n    - [**Core Examples**](#core-examples)\n    - [**Speech Understanding Examples**](#speech-understanding-examples)\n    - [**Streaming Examples**](#streaming-examples)\n    - [**Change the default settings**](#change-the-default-settings)\n  - [Playground](#playground)\n- [Advanced](#advanced)\n  - [How the SDK handles Default 
Configurations](#how-the-sdk-handles-default-configurations)\n    - [Defining Defaults](#defining-defaults)\n    - [Overriding Defaults](#overriding-defaults)\n  - [Synchronous vs Asynchronous](#synchronous-vs-asynchronous)\n  - [Getting the HTTP status code](#getting-the-http-status-code)\n  - [Polling Intervals](#polling-intervals)\n  - [Retrieving Existing Transcripts](#retrieving-existing-transcripts)\n    - [Retrieving a Single Transcript](#retrieving-a-single-transcript)\n    - [Retrieving Multiple Transcripts as a Group](#retrieving-multiple-transcripts-as-a-group)\n    - [Retrieving Transcripts Asynchronously](#retrieving-transcripts-asynchronously)\n\n# Documentation\n\nVisit our [AssemblyAI API Documentation](https://www.assemblyai.com/docs) to get an overview of our models!\n\n# Quick Start\n\n## Installation\n\n```bash\npip install -U assemblyai\n```\n\n## Examples\n\nBefore starting, you need to set the API key. If you don't have one yet, [**sign up for one**](https://www.assemblyai.com/dashboard/signup)!\n\n```python\nimport assemblyai as aai\n\n# set the API key\naai.settings.api_key = \"YOUR_API_KEY\"\n```\n\n---\n\n### **Core Examples**\n\n\u003cdetails\u003e\n  \u003csummary\u003eTranscribe a local audio file\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.base_url = \"https://api.assemblyai.com\"\naai.settings.api_key = \"YOUR_API_KEY\"\n\naudio_file = \"./example.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n    speaker_labels=True,\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config=config)\n\nif transcript.status == aai.TranscriptStatus.error:\n    raise RuntimeError(f\"Transcription failed: {transcript.error}\")\nprint(f\"\\nFull Transcript:\\n\\n{transcript.text}\")\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eTranscribe a URL\u003c/summary\u003e\n\n```python\nimport 
assemblyai as aai\n\naai.settings.base_url = \"https://api.assemblyai.com\"\naai.settings.api_key = \"YOUR_API_KEY\"\n\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n    speaker_labels=True,\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config=config)\n\nif transcript.status == aai.TranscriptStatus.error:\n    raise RuntimeError(f\"Transcription failed: {transcript.error}\")\nprint(f\"\\nFull Transcript:\\n\\n{transcript.text}\")\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eTranscribe binary data\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.base_url = \"https://api.assemblyai.com\"\naai.settings.api_key = \"YOUR_API_KEY\"\n\ntranscriber = aai.Transcriber()\n\n# Binary data is supported directly:\ntranscript = transcriber.transcribe(data)\n\n# Or: Upload data separately:\nupload_url = transcriber.upload_file(data)\ntranscript = transcriber.transcribe(upload_url)\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eExport subtitles of an audio file\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n  speech_models=[\"universal-3-pro\", \"universal-2\"],\n  language_detection=True\n)\n\ntranscript = aai.Transcriber(config=config).transcribe(audio_file)\n\nif transcript.status == \"error\":\n  raise RuntimeError(f\"Transcription failed: {transcript.error}\")\n\nsrt = transcript.export_subtitles_srt(\n  # Optional: Customize the maximum number of characters per caption\n  chars_per_caption=32\n  )\n\nwith open(f\"transcript_{transcript.id}.srt\", \"w\") as srt_file:\n  srt_file.write(srt)\n\n# vtt = transcript.export_subtitles_vtt()\n\n# with 
open(f\"transcript_{transcript_id}.vtt\", \"w\") as vtt_file:\n#   vtt_file.write(vtt)\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eList all sentences and paragraphs\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n  speech_models=[\"universal-3-pro\", \"universal-2\"],\n  language_detection=True\n)\n\ntranscript = aai.Transcriber(config=config).transcribe(audio_file)\n\nif transcript.status == \"error\":\n  raise RuntimeError(f\"Transcription failed: {transcript.error}\")\n\nsentences = transcript.get_sentences()\nfor sentence in sentences:\n  print(sentence.text)\n  print()\n\nparagraphs = transcript.get_paragraphs()\nfor paragraph in paragraphs:\n  print(paragraph.text)\n  print()\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eSearch for words in a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n  speech_models=[\"universal-3-pro\", \"universal-2\"],\n  language_detection=True\n)\n\ntranscript = aai.Transcriber(config=config).transcribe(audio_file)\n\nif transcript.status == \"error\":\n  raise RuntimeError(f\"Transcription failed: {transcript.error}\")\n\n# Set the words you want to search for\nwords = [\"foo\", \"bar\", \"foo bar\", \"42\"]\n\nmatches = transcript.word_search(words)\n\nfor match in matches:\n  print(f\"Found '{match.text}' {match.count} times in the transcript\")\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eAdd custom spellings on a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# 
audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n  speech_models=[\"universal-3-pro\", \"universal-2\"],\n  language_detection=True\n)\nconfig.set_custom_spelling(\n  {\n    \"Gettleman\": [\"gettleman\"],\n    \"SQL\": [\"Sequel\"],\n  }\n)\n\ntranscript = aai.Transcriber(config=config).transcribe(audio_file)\n\nif transcript.status == \"error\":\n  raise RuntimeError(f\"Transcription failed: {transcript.error}\")\n\nprint(transcript.text)\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eUpload a file\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\ntranscriber = aai.Transcriber()\nupload_url = transcriber.upload_file(data)\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eDelete a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n  speech_models=[\"universal-3-pro\", \"universal-2\"],\n  language_detection=True\n)\n\ntranscript = aai.Transcriber(config=config).transcribe(audio_file)\n\nif transcript.status == \"error\":\n  raise RuntimeError(f\"Transcription failed: {transcript.error}\")\n\nprint(transcript.text)\n\ntranscript.delete_by_id(transcript.id)\n\ntranscript = aai.Transcript.get_by_id(transcript.id)\nprint(transcript.text)\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eList transcripts\u003c/summary\u003e\n\nThis returns a page of transcripts you created.\n\n```python\nimport assemblyai as aai\n\ntranscriber = aai.Transcriber()\n\npage = transcriber.list_transcripts()\nprint(page.page_details)  # Page details\nprint(page.transcripts)  # List of transcripts\n```\n\nYou can apply filter parameters:\n\n```python\nparams = aai.ListTranscriptParameters(\n    limit=3,\n    
status=aai.TranscriptStatus.completed,\n)\npage = transcriber.list_transcripts(params)\n```\n\nYou can also paginate over all pages by using the helper property `before_id_of_prev_url`.\n\nThe `prev_url` always points to a page with older transcripts. If you extract the `before_id`\nof the `prev_url` query parameters, you can paginate over all pages from newest to oldest.\n\n```python\ntranscriber = aai.Transcriber()\n\nparams = aai.ListTranscriptParameters()\n\npage = transcriber.list_transcripts(params)\nwhile page.page_details.before_id_of_prev_url is not None:\n    params.before_id = page.page_details.before_id_of_prev_url\n    page = transcriber.list_transcripts(params)\n```\n\n\u003c/details\u003e\n\n---\n\n### **Speech Understanding Examples**\n\n\u003cdetails\u003e\n  \u003csummary\u003ePII Redact a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n).set_redact_pii(\n    policies=[\n        aai.PIIRedactionPolicy.person_name,\n        aai.PIIRedactionPolicy.organization,\n        aai.PIIRedactionPolicy.occupation,\n    ],\n    substitution=aai.PIISubstitutionPolicy.hash,\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\nprint(f\"Transcript ID:\", transcript.id)\n\nprint(transcript.text)\n```\n\nTo request a copy of the original audio file with the redacted information \"beeped\" out, set `redact_pii_audio=True` in the config.\nOnce the `Transcript` object is returned, you can access the URL of the redacted audio file with `get_redacted_audio_url`, or save the redacted audio directly to disk with `save_redacted_audio`.\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = 
\"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n).set_redact_pii(\n    policies=[\n        aai.PIIRedactionPolicy.person_name,\n        aai.PIIRedactionPolicy.organization,\n        aai.PIIRedactionPolicy.occupation,\n    ],\n    substitution=aai.PIISubstitutionPolicy.hash,\n    redact_audio=True\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\nprint(f\"Transcript ID:\", transcript.id)\n\nprint(transcript.text)\nprint(transcript.get_redacted_audio_url())\n```\n\n[Read more about PII redaction here.](https://www.assemblyai.com/docs/pii-redaction)\n\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eSummarize the content of a transcript over time\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n    auto_chapters=True\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\nprint(f\"Transcript ID:\", transcript.id)\n\nfor chapter in transcript.chapters:\n  print(f\"{chapter.start}-{chapter.end}: {chapter.headline}\")\n```\n\n[Read more about auto chapters here.](https://www.assemblyai.com/docs/speech-understanding/auto-chapters)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eSummarize the content of a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n  speech_models=[\"universal-3-pro\", \"universal-2\"],\n  language_detection=True,\n  summarization=True,\n  
summary_model=aai.SummarizationModel.informative,\n  summary_type=aai.SummarizationType.bullets\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\n\nprint(f\"Transcript ID: \", transcript.id)\nprint(transcript.summary)\n```\n\nBy default, the summarization model will be `informative` and the summarization type will be `bullets`. [Read more about summarization models and types here](https://www.assemblyai.com/docs/speech-understanding/summarization).\n\nTo change the model and/or type, pass additional parameters to the `TranscriptionConfig`:\n\n```python\nconfig=aai.TranscriptionConfig(\n  summarization=True,\n  summary_model=aai.SummarizationModel.catchy,\n  summary_type=aai.SummarizationType.headline\n)\n```\n\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eDetect sensitive content in a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n    content_safety=True\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\n\nprint(f\"Transcript ID:\", transcript.id)\n\nfor result in transcript.content_safety.results:\n    print(result.text)\n    print(f\"Timestamp: {result.timestamp.start} - {result.timestamp.end}\")\n\n    # Get category, confidence, and severity.\n    for label in result.labels:\n        print(f\"{label.label} - {label.confidence} - {label.severity}\")  # content safety category\n\n# Get the confidence of the most common labels in relation to the entire audio file.\nfor label, confidence in transcript.content_safety.summary.items():\n    print(f\"{confidence * 100}% confident that the audio contains {label}\")\n\n# Get the overall severity of the most common labels in relation to the entire audio file.\nfor label, 
severity_confidence in transcript.content_safety.severity_score_summary.items():\n    print(f\"{severity_confidence.low * 100}% confident that the audio contains low-severity {label}\")\n    print(f\"{severity_confidence.medium * 100}% confident that the audio contains medium-severity {label}\")\n    print(f\"{severity_confidence.high * 100}% confident that the audio contains high-severity {label}\")\n```\n\n[Read more about the content safety categories.](https://www.assemblyai.com/docs/content-moderation)\n\nBy default, the content safety model will only include labels with a confidence greater than 0.5 (50%). To change this, pass `content_safety_confidence` (as an integer percentage between 25 and 100, inclusive) to the `TranscriptionConfig`:\n\n```python\nconfig=aai.TranscriptionConfig(\n  content_safety=True,\n  content_safety_confidence=80,  # only include labels with a confidence greater than 80%\n)\n```\n\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eAnalyze the sentiment of sentences in a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n    sentiment_analysis=True\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\nprint(f\"Transcript ID:\", transcript.id)\n\nfor sentiment_result in transcript.sentiment_analysis:\n    print(sentiment_result.text)\n    print(sentiment_result.sentiment)  # POSITIVE, NEUTRAL, or NEGATIVE\n    print(sentiment_result.confidence)\n    print(f\"Timestamp: {sentiment_result.start} - {sentiment_result.end}\")\n```\n\nIf `speaker_labels` is also enabled, then each sentiment analysis result will also include a `speaker` field.\n\n```python\n# ...\n\nconfig = aai.TranscriptionConfig(sentiment_analysis=True, 
speaker_labels=True)\n\n# ...\n\nfor sentiment_result in transcript.sentiment_analysis:\n  print(sentiment_result.speaker)\n```\n\n[Read more about sentiment analysis here.](https://www.assemblyai.com/docs/speech-understanding/sentiment-analysis)\n\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eIdentify entities in a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n    entity_detection=True\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\nprint(f\"Transcript ID:\", transcript.id)\n\nfor entity in transcript.entities:\n    print(entity.text)\n    print(entity.entity_type)\n    print(f\"Timestamp: {entity.start} - {entity.end}\\n\")\n```\n\n[Read more about entity detection here.](https://www.assemblyai.com/docs/speech-understanding/entity-detection)\n\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eDetect topics in a transcript (IAB Classification)\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n    iab_categories=True\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\nprint(f\"Transcript ID:\", transcript.id)\n\n# Get the parts of the transcript that were tagged with topics\nfor result in transcript.iab_categories.results:\n    print(result.text)\n    print(f\"Timestamp: {result.timestamp.start} - {result.timestamp.end}\")\n    for label in result.labels:\n        print(f\"{label.label} ({label.relevance})\")\n\n# Get a 
summary of all topics in the transcript\nfor topic, relevance in transcript.iab_categories.summary.items():\n    print(f\"Audio is {relevance * 100}% relevant to {topic}\")\n```\n\n[Read more about IAB classification here.](https://www.assemblyai.com/docs/speech-understanding/topic-detection)\n\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eIdentify important words and phrases in a transcript\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.api_key = \"\u003cYOUR_API_KEY\u003e\"\n\n# audio_file = \"./local_file.mp3\"\naudio_file = \"https://assembly.ai/wildfires.mp3\"\n\nconfig = aai.TranscriptionConfig(\n    speech_models=[\"universal-3-pro\", \"universal-2\"],\n    language_detection=True,\n    auto_highlights=True\n)\n\ntranscript = aai.Transcriber().transcribe(audio_file, config)\nprint(f\"Transcript ID:\", transcript.id)\n\nfor result in transcript.auto_highlights.results:\n    print(f\"Highlight: {result.text}, Count: {result.count}, Rank: {result.rank}, Timestamps: {result.timestamps}\")\n```\n\n[Read more about auto highlights here.](https://www.assemblyai.com/docs/speech-understanding/key-phrases)\n\n\u003c/details\u003e\n\n---\n\n### **Streaming Examples**\n\n[Read more about our streaming service.](https://www.assemblyai.com/docs/streaming/universal-3-pro)\n\n\u003cdetails\u003e\n  \u003csummary\u003eStream your microphone in real-time\u003c/summary\u003e\n\n```bash\npip install -U assemblyai\n```\n\n```python\nimport logging\nfrom typing import Type\n\nimport assemblyai as aai\nfrom assemblyai.streaming.v3 import (\n    BeginEvent,\n    StreamingClient,\n    StreamingClientOptions,\n    StreamingError,\n    StreamingEvents,\n    StreamingParameters,\n    TurnEvent,\n    TerminationEvent,\n)\n\napi_key = \"\u003cYOUR_API_KEY\u003e\"\n\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(__name__)\n\ndef on_begin(self: Type[StreamingClient], event: BeginEvent):\n    print(f\"Session started: 
{event.id}\")\n\ndef on_turn(self: Type[StreamingClient], event: TurnEvent):\n    print(f\"{event.transcript} ({event.end_of_turn})\")\n\ndef on_terminated(self: Type[StreamingClient], event: TerminationEvent):\n    print(\n        f\"Session terminated: {event.audio_duration_seconds} seconds of audio processed\"\n    )\n\ndef on_error(self: Type[StreamingClient], error: StreamingError):\n    print(f\"Error occurred: {error}\")\n\ndef main():\n    client = StreamingClient(\n        StreamingClientOptions(\n            api_key=api_key,\n            api_host=\"streaming.assemblyai.com\",\n        )\n    )\n\n    client.on(StreamingEvents.Begin, on_begin)\n    client.on(StreamingEvents.Turn, on_turn)\n    client.on(StreamingEvents.Termination, on_terminated)\n    client.on(StreamingEvents.Error, on_error)\n\n    client.connect(\n        StreamingParameters(\n            sample_rate=16000,\n            speech_model=\"u3-rt-pro\",\n        )\n    )\n\n    try:\n        client.stream(\n            aai.extras.MicrophoneStream(sample_rate=16000)\n        )\n    finally:\n        client.disconnect(terminate=True)\n\nif __name__ == \"__main__\":\n    main()\n```\n\n\u003c/details\u003e\n\n---\n\n### **Change the default settings**\n\nYou'll find the `Settings` class with all default values in [types.py](./assemblyai/types.py).\n\n\u003cdetails\u003e\n  \u003csummary\u003eChange the default timeout and polling interval\u003c/summary\u003e\n\n```python\nimport assemblyai as aai\n\naai.settings.base_url = \"https://api.assemblyai.com\"\naai.settings.api_key = \"YOUR_API_KEY\"\n\n# The HTTP timeout in seconds for general requests, default is 30.0\naai.settings.http_timeout = 60.0\n\n# The polling interval in seconds for long-running requests, default is 3.0\naai.settings.polling_interval = 10.0\n```\n\n\u003c/details\u003e\n\n---\n\n## Playground\n\nVisit our Playground to try out all of our Speech AI models and LeMUR for free:\n\n- 
[Playground](https://www.assemblyai.com/dashboard/playground/)\n\n# Advanced\n\n## How the SDK handles Default Configurations\n\n### Defining Defaults\n\nWhen no `TranscriptionConfig` is passed to the `Transcriber` or its methods, the `Transcriber` uses a default instance of `TranscriptionConfig`.\n\nIf you would like to reuse the same `TranscriptionConfig` for all your transcriptions,\nyou can set it on the `Transcriber` directly:\n\n```python\nconfig = aai.TranscriptionConfig(punctuate=False, format_text=False)\n\ntranscriber = aai.Transcriber(config=config)\n\n# will use the same config for all `.transcribe*(...)` operations\ntranscriber.transcribe(\"https://example.org/audio.wav\")\n```\n\n### Overriding Defaults\n\nYou can override the default configuration later via the `.config` property of the `Transcriber`:\n\n```python\ntranscriber = aai.Transcriber()\n\n# override the `Transcriber`'s config with a new config\ntranscriber.config = aai.TranscriptionConfig(punctuate=False, format_text=False)\n```\n\nTo override the `Transcriber`'s configuration for a specific operation, pass a different config via the `config` parameter of a `.transcribe*(...)` method:\n\n```python\nconfig = aai.TranscriptionConfig(punctuate=False, format_text=False)\n# set a default configuration\ntranscriber = aai.Transcriber(config=config)\n\ntranscriber.transcribe(\n    \"https://example.com/audio.mp3\",\n    # overrides the above configuration on the `Transcriber` with the following\n    config=aai.TranscriptionConfig(speech_models=[\"universal-3-pro\", \"universal-2\"], multichannel=True, disfluencies=True)\n)\n```\n\n## Synchronous vs Asynchronous\n\nThe SDK currently provides two ways to transcribe audio files.\n\nThe synchronous approach halts the application's flow until the transcription has been completed.\n\nThe asynchronous approach allows the application to continue running while the transcription is being processed. 
The caller receives a [`concurrent.futures.Future`](https://docs.python.org/3/library/concurrent.futures.html) object, which can be used to check the status of the transcription at a later time.\n\nYou can identify the two approaches by the `_async` suffix in the `Transcriber`'s method name (e.g. `transcribe` vs `transcribe_async`).\n\n## Getting the HTTP status code\n\nThere are two ways of accessing the HTTP status code:\n\n- All custom AssemblyAI Error classes have a `status_code` attribute.\n- The latest HTTP response is stored in `aai.Client.get_default().last_response` after every API call. This approach also works if no exception is thrown.\n\n```python\ntranscriber = aai.Transcriber()\n\n# Option 1: Catch the error\ntry:\n    transcript = transcriber.submit(\"./example.mp3\")\nexcept aai.AssemblyAIError as e:\n    print(e.status_code)\n\n# Option 2: Access the latest response through the client\nclient = aai.Client.get_default()\n\ntry:\n    transcript = transcriber.submit(\"./example.mp3\")\nexcept Exception:\n    print(client.last_response)\n    print(client.last_response.status_code)\n```\n\n## Polling Intervals\n\nBy default, we poll the `Transcript`'s status every 3 seconds. If you would like to adjust that interval:\n\n```python\nimport assemblyai as aai\n\naai.settings.base_url = \"https://api.assemblyai.com\"\naai.settings.api_key = \"YOUR_API_KEY\"\n\naai.settings.polling_interval = 1.0\n```\n\n## Retrieving Existing Transcripts\n\n### Retrieving a Single Transcript\n\nIf you previously created a transcript, you can use its ID to retrieve it later.\n\n```python\nimport assemblyai as aai\n\naai.settings.base_url = \"https://api.assemblyai.com\"\naai.settings.api_key = \"YOUR_API_KEY\"\n\ntranscript = aai.Transcript.get_by_id(\"\u003cTRANSCRIPT_ID\u003e\")\n\nprint(transcript.id)\nprint(transcript.text)\n```\n\n### Retrieving Multiple Transcripts as a Group\n\nYou can also retrieve multiple existing transcripts and combine them into a single `TranscriptGroup` object. This allows you to perform operations on the transcript group as a single unit.\n\n```python\nimport assemblyai as aai\n\naai.settings.base_url = \"https://api.assemblyai.com\"\naai.settings.api_key = \"YOUR_API_KEY\"\n\ntranscript_group = aai.TranscriptGroup.get_by_ids([\"\u003cTRANSCRIPT_ID_1\u003e\", \"\u003cTRANSCRIPT_ID_2\u003e\"])\n```\n\n### Retrieving Transcripts Asynchronously\n\nBoth `Transcript.get_by_id` and `TranscriptGroup.get_by_ids` have asynchronous counterparts, `Transcript.get_by_id_async` and `TranscriptGroup.get_by_ids_async`, respectively. These functions immediately return a `Future` object, rather than blocking until the transcript(s) are retrieved.\n\nSee the above section on [Synchronous vs Asynchronous](#synchronous-vs-asynchronous) for more information.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fassemblyai%2Fassemblyai-python-sdk","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fassemblyai%2Fassemblyai-python-sdk","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fassemblyai%2Fassemblyai-python-sdk/lists"}