{"id":47566450,"url":"https://github.com/aws-samples/sample-badgers","last_synced_at":"2026-04-14T01:00:35.703Z","repository":{"id":337759825,"uuid":"1148963395","full_name":"aws-samples/sample-badgers","owner":"aws-samples","description":"Guidance on deploying a generative AI document analysis solution with Amazon Bedrock AgentCore. Auto-classifies, enhances, and aggregates multi-type documents using Gestalt-informed vision prompts. Custom analyzer creation wizard. Scripted CDK deployment. Gradio frontend included.","archived":false,"fork":false,"pushed_at":"2026-04-03T13:53:36.000Z","size":62257,"stargazers_count":12,"open_issues_count":5,"forks_count":1,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-03T14:09:45.968Z","etag":null,"topics":["agentcore","agentcore-sdk","agentic-ai","agentic-workflow","amazon-nova","badgers","cdk","claude","composable-prompts","document-extraction","document-intelligence","document-vision","full-text-extraction","gestalt","prompt-engineering","strands-agent-sdk","strands-agents","vision-models"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/aws-samples.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":"NOTICE","maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-03T15:12:21.000Z","updated_at":"2026-04-03T13:53:32.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/aws-samples
/sample-badgers","commit_stats":null,"previous_names":["aws-samples/sample-badgers"],"tags_count":0,"template":false,"template_full_name":"amazon-archives/__template_Apache-2.0","purl":"pkg:github/aws-samples/sample-badgers","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fsample-badgers","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fsample-badgers/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fsample-badgers/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fsample-badgers/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/aws-samples","download_url":"https://codeload.github.com/aws-samples/sample-badgers/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fsample-badgers/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31777348,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-14T00:11:49.126Z","status":"ssl_error","status_checked_at":"2026-04-14T00:10:29.837Z","response_time":93,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agentcore","agentcore-sdk","agentic-ai","agentic-workflow","amazon-nova","badgers","cdk","claude","composable-prompts","document-extraction","document-intelligence","document-vision","full-text-extraction","gestalt","prompt-engineering","strands-agent-sdk","strands-agents","vision-models"],"created_at":"2026-03-30T06:00:24.600Z","updated_at":"2026-04-14T01:00:35.693Z","avatar_url":"https://github.com/aws-samples.png","language":"Python","readme":"\u003e 🚧 **This repository is under active development.** Watch the repo, monitor branches and issues, and check the [Changelog](CHANGELOG.md) for the latest updates.\n\n\u003csub\u003e🧭 **Navigation:**\u003c/sub\u003e\u003cbr\u003e\n\u003csub\u003e🔵 **Home** | [Vision LLM Theory](VISION_LLM_THEORY_README.md) | [Local Testing](local_testing/LOCAL_TESTING_README.md) | [Deployment UI](deployment/ui/DEPLOYMENT_UI_README.md) | [Deployment](deployment/DEPLOYMENT_README.md) | [CDK Stacks](deployment/stacks/STACKS_README.md) | [Runtime](deployment/runtime/RUNTIME_README.md) | [S3 Files](deployment/s3_files/S3_FILES_README.md) | [Lambda Analyzers](deployment/lambdas/LAMBDA_ANALYZERS.md) | [Prompting System](deployment/s3_files/prompts/PROMPTING_SYSTEM_README.md)\u003c/sub\u003e\n\n---\n\n# 🦡 BADGERS\n\n**Broad Agentic Document Generative Extraction \u0026 Recognition System**\n\nBADGERS transforms document processing through vision-enabled AI and deep layout analysis. 
Unlike traditional text extraction tools, BADGERS understands document structure and meaning by recognizing visual hierarchies, reading patterns, and contextual relationships between elements.\n\n## 🤔 Why BADGERS?\n\nTraditional document processing tools extract text but lose context. They can't distinguish a header from body text, understand table relationships, or recognize that a diagram explains the adjacent paragraph. BADGERS solves this by:\n\n- 🏗️ **Preserving semantic structure** - Maintains document hierarchy and element relationships\n- 👁️ **Understanding visual context** - Recognizes how layout conveys meaning\n- 📚 **Processing diverse content** - Handles 21+ element types from handwriting to equations\n- 🤖 **Automating complex workflows** - Orchestrates multiple specialized analyzers via an AI agent\n\nUse cases: research acceleration, compliance automation, content management, accessibility remediation.\n\n## 📸 Screenshots\n\n| Local Testing — Home                                               | Local Testing — Chat                                               |\n| ------------------------------------------------------------------ | ------------------------------------------------------------------ |\n| ![Home Page](github-assets/badgers-local-testing-ui-home-page.png) | ![Chat Interface](github-assets/badgers-local-testing-ui-chat.png) |\n\n| Deployment UI — Stacks                                                      | Deployment UI — Config Editor                                       |\n| --------------------------------------------------------------------------- | ------------------------------------------------------------------- |\n| ![Stack Information](github-assets/badgers-deploy-ui-stack-information.png) | ![Config Editor](github-assets/badgers-deploy-ui-config-editor.png) |\n\n## ⚙️ How It Works\n\n```\n┌─────────────────────────────────────────────────────────────────────────────┐\n│                           AgentCore Runtime              
                   │\n│   ┌─────────────────────────────────────────────────────────────────────┐   │\n│   │  PDF Analysis Agent (Strands)                                       │   │\n│   │  - Claude Sonnet 4.5 with Extended Thinking                         │   │\n│   │  - Session state management                                         │   │\n│   │  - MCP tool orchestration                                           │   │\n│   └─────────────────────────────────────────────────────────────────────┘   │\n└─────────────────────────────────────────────────────────────────────────────┘\n                                      │\n                                      ▼\n┌─────────────────────────────────────────────────────────────────────────────┐\n│                           AgentCore Gateway                                 │\n│   - MCP Protocol (2025-03-26)                                               │\n│   - Cognito JWT Authentication                                              │\n│   - Semantic tool search                                                    │\n└─────────────────────────────────────────────────────────────────────────────┘\n                                      │\n                   ┌──────────────────┼──────────────────┐\n                   │                  │                  │\n                   ▼                  ▼                  ▼\n            ┌─────────────┐    ┌─────────────┐    ┌─────────────┐\n            │   Lambda    │    │   Lambda    │    │   Lambda    │\n            │  Analyzer   │    │  Analyzer   │    │  Analyzer   │\n            │ (25 tools)  │    │             │    │             │\n            └─────────────┘    └─────────────┘    └─────────────┘\n                   │                  │                  │\n                   └──────────────────┼──────────────────┘\n                                      ▼\n                               ┌─────────────┐\n                               │   Bedrock   │\n                              
 │   Claude    │\n                               └─────────────┘\n```\n\n1. 📄 **User submits a document** with analysis instructions\n2. 🧠 **Strands Agent** (running in AgentCore Runtime) interprets the request\n3. 🔧 **Agent selects tools** from a library of specialized analyzers via MCP Gateway\n4. ⚡ **Lambda analyzers** (standardized and domain-specific functions, including container-based) process document elements using Claude vision models\n5. 📊 **Results aggregate** with preserved structure and semantic relationships\n\n## 🛠️ Tech Stack\n\n| Component          | Technology                                                         |\n| ------------------ | ------------------------------------------------------------------ |\n| 🤖 Agent Framework  | [Strands Agents](https://github.com/strands-agents/strands-agents) |\n| 🏠 Agent Hosting    | Amazon Bedrock AgentCore Runtime                                   |\n| 🚪 Tool Gateway     | Amazon Bedrock AgentCore Gateway (MCP Protocol)                    |\n| 🧠 Foundation Model | Claude Sonnet 4.5 (via Amazon Bedrock)                             |\n| ⚡ Compute          | AWS Lambda (modular analyzer functions, including container-based) |\n| 📦 Storage          | Amazon S3 (configs, prompts, outputs)                              |\n| 🔐 Auth             | Amazon Cognito (OAuth 2.0 client credentials)                      |\n| 🏗️ IaC              | AWS CDK (Python)                                                   |\n| 📈 Observability    | CloudWatch Logs, X-Ray                                             |\n| 📊 Cost Tracking    | Bedrock Application Inference Profiles                             |\n\n## 🔬 Analyzers\n\n| Analyzer                             | Purpose                                                                                    |\n| ------------------------------------ | ------------------------------------------------------------------------------------------ |\n| 📸 `pdf_to_images_converter`          | 
Convert PDF pages to images                                                                |\n| 🏷️ `classify_pdf_content`             | Classify document content type                                                             |\n| 📝 `full_text_analyzer`               | Extract all text content                                                                   |\n| 📊 `table_analyzer`                   | Extract and structure tables                                                               |\n| 📈 `charts_analyzer`                  | Analyze charts and graphs                                                                  |\n| 🔀 `diagram_analyzer`                 | Process diagrams and flowcharts                                                            |\n| 📐 `layout_analyzer`                  | Document structure analysis                                                                |\n| ♿ `accessibility_analyzer`           | Generate accessibility metadata (part of remediation)                                      |\n| 🏥 `decision_tree_analyzer`           | Medical/clinical document analysis                                                         |\n| 🔬 `scientific_analyzer`              | Scientific paper analysis                                                                  |\n| ✍️ `handwriting_analyzer`             | Handwritten text recognition                                                               |\n| 💻 `code_block_analyzer`              | Extract code snippets                                                                      |\n| 🗂️ `metadata_generic_analyzer`        | Generic metadata extraction                                                                |\n| 🗂️ `metadata_mads_analyzer`           | MADS metadata format extraction                                                            |\n| 🗂️ `metadata_mods_analyzer`           | MODS metadata format extraction                                                            |\n| 🔑 
`keyword_topic_analyzer`           | Extract keywords and topics                                                                |\n| 🔧 `remediation_analyzer`             | PDF accessibility remediation (container, content stream tagging + structure tree builder) |\n| 📄 `page_analyzer`                    | Single page content analysis                                                               |\n| 🧱 `elements_analyzer`                | Document element detection                                                                 |\n| 🧱 `robust_elements_analyzer`         | Enhanced element detection with fallbacks                                                  |\n| 👁️ `general_visual_analysis_analyzer` | General-purpose visual content analysis                                                    |\n| ✏️ `editorial_analyzer`               | Editorial content and markup analysis                                                      |\n| 🗺️ `war_map_analyzer`                 | Historical war map analysis                                                                |\n| 🎓 `edu_transcript_analyzer`          | Educational transcript analysis                                                            |\n| 🔗 `correlation_analyzer`             | Correlate multi-analyzer results per page                                                  |\n| 🖼️ `image_enhancer`                   | Image enhancement and preprocessing                                                        |\n\n## 🚀 Deployment\n\n### Prerequisites\n\n- ☁️ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) configured with credentials\n- 📦 [AWS CDK v2](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html) (`npm install -g aws-cdk`)\n- 🐳 [Docker](https://docs.docker.com/get-started/get-docker/) (running)\n- 🐍 [Python 3.12+](https://www.python.org/downloads/)\n- ⚡ [uv](https://docs.astral.sh/uv/getting-started/installation/)\n\n### Quick Start\n\n```bash\ncd 
deployment\n./deploy_from_scratch.sh\n```\n\nThis deploys 10 CloudFormation stacks:\n1. 📦 S3 (config + output buckets)\n2. 🔐 Cognito (OAuth authentication)\n3. 👤 IAM (execution roles)\n4. 🐳 ECR (container registry)\n5. ⚡ Lambda (25 analyzer functions)\n6. 🚪 Gateway (MCP endpoint)\n7. 🧠 Memory (session persistence)\n8. 📊 Inference Profiles (cost tracking)\n9. 🏃 Runtime (Strands agent container)\n10. 🧩 Custom Analyzers (optional, wizard-created)\n\n### Manual Steps\n\nSee [deployment/DEPLOYMENT_README.md](deployment/DEPLOYMENT_README.md) for step-by-step instructions.\n\n### Cleanup\n\n```bash\ncd deployment\n./destroy.sh\n```\n\n## 📁 Project Structure\n\n```\n├── deployment/\n│   ├── app.py                 # CDK app entry point\n│   ├── stacks/                # CDK stack definitions\n│   ├── lambdas/code/          # Analyzer Lambda functions\n│   ├── runtime/               # AgentCore Runtime container\n│   ├── s3_files/              # Prompts, schemas, manifests\n│   └── badgers-foundation/    # Shared analyzer framework\n├── local_testing/             # Local dev/testing UI (React + Express)\n│   ├── src/                   # React components (chat, wizard, editor, pricing, etc.)\n│   └── server/                # Express API server\n└── pyproject.toml\n```\n\n---\n\n## 🔍 Technical Deep Dive\n\n### 📦 Lambda Layers\n\nBADGERS uses Lambda layers shared across analyzer functions:\n\n**🏗️ Foundation Layer** (`layer.zip`)\n- Built via `deployment/lambdas/build_foundation_layer.sh`\n- Contains the analyzer framework (7 Python modules)\n- Includes dependencies: boto3, botocore\n- Includes core system prompts used by all analyzers\n\n```\nlayer/python/\n├── foundation/\n│   ├── analyzer_foundation.py    # 🎯 Main orchestration class\n│   ├── bedrock_client.py         # 🔄 Bedrock API with retry/fallback\n│   ├── configuration_manager.py  # ⚙️ Config loading/validation\n│   ├── image_processor.py        # 🖼️ Image optimization\n│   ├── message_chain_builder.py  # 💬 Claude 
message formatting\n│   ├── prompt_loader.py          # 📜 Prompt file loading (local/S3)\n│   └── response_processor.py     # 📤 Response extraction\n├── config/\n│   └── config.py\n└── prompts/core_system_prompts/\n    └── *.xml\n```\n\n**📄 Poppler Layer** (`poppler-qpdf-layer.zip`)\n- PDF rendering library for `pdf_to_images_converter`\n- Built via `deployment/lambdas/build_poppler_qdf_layer.sh`\n\n### 🔬 How an Analyzer Works\n\nEach analyzer follows the same pattern using `AnalyzerFoundation`:\n\n```python\n# Lambda handler (simplified)\ndef lambda_handler(event, context):\n    # 1️⃣ Load config from S3 manifest\n    config = load_manifest_from_s3(bucket, \"full_text_analyzer\")\n\n    # 2️⃣ Initialize foundation with S3-aware prompt loader\n    analyzer = AnalyzerFoundation(...)\n\n    # 3️⃣ Run analysis pipeline\n    result = analyzer.analyze(image_data)\n\n    # 4️⃣ Save result to S3 and return\n    save_result_to_s3(result, session_id)\n    return {\"result\": result}\n```\n\nThe `analyze()` method orchestrates:\n1. 🖼️ **Image processing** - Resize/optimize for Claude's vision API\n2. 📜 **Prompt loading** - Combine wrapper + analyzer prompts from S3\n3. 💬 **Message building** - Format for Bedrock Converse API\n4. ⚡ **Dynamic token estimation** - Score image complexity and set token budget (when enabled)\n5. 🤖 **Model invocation** - Call Claude with retry/fallback logic\n6. 
✅ **Response processing** - Extract and validate result\n\n### 📜 Prompting System\n\nPrompts are modular XML files composed at runtime:\n\n```\ns3://config-bucket/\n├── core_system_prompts/\n│   ├── prompt_system_wrapper.xml   # 🎁 Main template with placeholders\n│   ├── core_rules/rules.xml        # 📏 Shared rules for all analyzers\n│   └── error_handling/*.xml        # ⚠️ Error response templates\n├── prompts/{analyzer_name}/\n│   ├── {analyzer}_job_role.xml     # 👤 Role definition\n│   ├── {analyzer}_context.xml      # 🌍 Domain context\n│   ├── {analyzer}_rules.xml        # 📏 Analyzer-specific rules\n│   ├── {analyzer}_tasks.xml        # ✅ Task instructions\n│   └── {analyzer}_format.xml       # 📋 Output format spec\n└── wrappers/\n    └── prompt_system_wrapper.xml\n```\n\nThe `PromptLoader` composes the final system prompt:\n\n```xml\n\u003c!-- prompt_system_wrapper.xml --\u003e\n\u003csystem_prompt\u003e\n    {core_rules}           \u003c!-- 📏 Injected from core_rules/rules.xml --\u003e\n    {composed_prompt}      \u003c!-- 🧩 Injected from analyzer prompt files --\u003e\n    {error_handler_general}\n    {error_handler_not_found}\n\u003c/system_prompt\u003e\n```\n\nPlaceholders like `[[PIXEL_WIDTH]]` and `[[PIXEL_HEIGHT]]` are replaced with actual image dimensions at runtime.\n\n### ⚙️ Configuration System\n\nEach analyzer has a manifest file in S3:\n\n```json\n// s3://config-bucket/manifests/full_text_analyzer.json\n{\n    \"tool\": {\n        \"name\": \"analyze_full_text_tool\",\n        \"description\": \"Extracts text content maintaining reading order...\",\n        \"inputSchema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"image_path\": { \"type\": \"string\" },\n                \"session_id\": { \"type\": \"string\" },\n                \"audit_mode\": { \"type\": \"boolean\" }\n            },\n            \"required\": [\"image_path\", \"session_id\"]\n        }\n    },\n    \"analyzer\": {\n        \"name\": 
\"full_text_analyzer\",\n        \"enhancement_eligible\": true,\n        \"model_selections\": {\n            \"primary\": \"us.anthropic.claude-sonnet-4-5-20250929-v1:0\",\n            \"fallback_list\": [\n                \"us.anthropic.claude-haiku-4-5-20251001-v1:0\",\n                \"us.amazon.nova-premier-v1:0\"\n            ]\n        },\n        \"max_retries\": 3,\n        \"prompt_files\": [\n            \"full_text_job_role.xml\",\n            \"full_text_context.xml\",\n            \"full_text_rules.xml\",\n            \"full_text_tasks_extraction.xml\",\n            \"full_text_format.xml\"\n        ],\n        \"max_examples\": 0,\n        \"analysis_text\": \"full text content\",\n        \"expected_output_tokens\": 6000,\n        \"output_extension\": \"xml\"\n    }\n}\n```\n\nKey configuration features:\n- 🔄 **Model fallback chain** - Primary model with ordered fallbacks\n- 🔁 **Retry logic** - Configurable retry count per analyzer\n- 🧩 **Prompt composition** - List of XML files to combine\n- 📋 **Tool schema** - MCP-compatible input schema for Gateway\n- 🖼️ **Enhancement eligible** - Flag indicating analyzer benefits from image preprocessing (used by `image_enhancer` tool)\n\nGlobal settings (from environment or defaults):\n```python\n{\n    \"max_tokens\": 8000,\n    \"temperature\": 0.1,\n    \"max_image_size\": 20971520,  # 20MB\n    \"max_dimension\": 2048,\n    \"jpeg_quality\": 85,\n    \"throttle_delay\": 1.0,\n    \"aws_region\": \"us-west-2\"\n}\n```\n\n### ⚡ Dynamic Token Estimation\n\nWhen enabled, BADGERS estimates the optimal `max_tokens` per image based on visual complexity, reducing cost on simple documents and avoiding truncation on dense ones. The scorer runs on the already-processed image bytes — no extra I/O.\n\nFour metrics are combined into a complexity score: text pixel ratio, grayscale entropy, edge density, and color standard deviation. 
The score maps to a token budget (8K / 12K / 16K / 24K).\n\n**Enabling:** Toggle \"Dynamic Token Estimation\" in the chat UI, or set the Lambda environment variable `DYNAMIC_TOKENS_ENABLED=true`.\n\n**Tuning:** Add a `dynamic_tokens` block to an analyzer manifest to customize weights and thresholds:\n```json\n\"dynamic_tokens\": {\n    \"weights\": {\n        \"text_ratio\": 0.2,\n        \"entropy\": 0.3,\n        \"edge_density\": 0.3,\n        \"color_std\": 0.2\n    },\n    \"thresholds\": [\n        {\"max_score\": 0.20, \"max_tokens\": 8000},\n        {\"max_score\": 0.30, \"max_tokens\": 12000},\n        {\"max_score\": 0.45, \"max_tokens\": 16000},\n        {\"max_score\": 1.00, \"max_tokens\": 24000}\n    ]\n}\n```\n\n**Observability:** When active, logs report the estimated budget, actual token usage, and utilization percentage for calibration.\n\n### 📊 Inference Profiles for Cost Tracking\n\nBADGERS uses Application Inference Profiles to enable cost allocation and usage monitoring. The system maps model IDs to profile ARNs at runtime:\n\n```\n┌─────────────────────────────────────────────────────────────────────────────┐\n│                        Inference Profile Flow                               │\n├─────────────────────────────────────────────────────────────────────────────┤\n│                                                                             │\n│  1. CDK deploys InferenceProfilesStack                                      │\n│     └─\u003e Creates ApplicationInferenceProfile for each model                  │\n│         • badgers-claude-sonnet-{id}  (US)                               │\n│         • badgers-claude-haiku-{id}   (US)                               │\n│         • badgers-claude-opus-{id}    (US)                               │\n│         • badgers-nova-premier-{id}   (US)                               │\n│                                                                             │\n│  2. 
Runtime receives profile ARNs as environment variables                  │\n│     └─\u003e CLAUDE_SONNET_PROFILE_ARN, CLAUDE_HAIKU_PROFILE_ARN, etc.           │\n│                                                                             │\n│  3. At invocation, bedrock_client.py maps model_id → profile ARN            │\n│     └─\u003e \"us.anthropic.claude-sonnet-4-5-*\" → $CLAUDE_SONNET_PROFILE_ARN    │\n│                                                                             │\n│  4. Bedrock invoked with profile ARN (enables cost tracking)                │\n│     └─\u003e Falls back to model ID if no profile configured                     │\n│                                                                             │\n└─────────────────────────────────────────────────────────────────────────────┘\n```\n\nModel ID to environment variable mapping:\n| Model Pattern         | Environment Variable        |\n| --------------------- | --------------------------- |\n| `*claude-sonnet-4-5*` | `CLAUDE_SONNET_PROFILE_ARN` |\n| `*claude-haiku-4-5*`  | `CLAUDE_HAIKU_PROFILE_ARN`  |\n| `*claude-opus-4-6*`   | `CLAUDE_OPUS_PROFILE_ARN`   |\n| `*nova-premier*`      | `NOVA_PREMIER_PROFILE_ARN`  |\n\n### ➕ Adding a New Analyzer\n\n**Option 1: Use the Wizard (Recommended)**\n\n```bash\ncd local_testing\nnpm run dev\n```\n\nThe Analyzer Creation Wizard is available as the 🧙 Create Analyzer tab in the [Local Testing UI](local_testing/LOCAL_TESTING_README.md).\n\n**Option 2: Manual Creation**\n\n1. 📜 Create prompt files in `deployment/s3_files/prompts/{analyzer_name}/`\n2. 📋 Create manifest in `deployment/s3_files/manifests/{analyzer_name}.json`\n3. 📐 Create schema in `deployment/s3_files/schemas/{analyzer_name}.json`\n4. ⚡ Create Lambda code in `deployment/lambdas/code/{analyzer_name}/lambda_handler.py`\n5. 📝 Register in `deployment/stacks/lambda_stack.py`\n6. 
🚀 Redeploy: `cdk deploy badgers-lambda badgers-gateway`\n\n---\n\n## 🔧 Troubleshooting\n\n### Service Control Policy (SCP) Blocks Cross-Region Inference\n\nIf your AWS organization uses strict SCPs that deny cross-region Bedrock operations, you may see:\n\n```\nAccessDeniedException: ... is not authorized to perform: bedrock:InvokeModelWithResponseStream\non resource: arn:aws:bedrock:::foundation-model/anthropic.claude-* with an explicit deny\nin a service control policy\n```\n\nBADGERS defaults to regional (`us.anthropic.*`) inference profiles which avoid cross-region routing. If you previously deployed with `global.anthropic.*` profiles, redeploy after pulling the latest code.\n\n### Marketplace Subscription Error on First Invocation\n\nAfter a fresh deployment, the first model invocation may fail with:\n\n```\nAccessDeniedException: Model access is denied due to IAM user or service role is not authorized\nto perform the required AWS Marketplace actions (aws-marketplace:ViewSubscriptions,\naws-marketplace:Subscribe)\n```\n\nThe IAM stack now includes `aws-marketplace:ViewSubscriptions` and `aws-marketplace:Subscribe` permissions. If you see this error on an older deployment, redeploy the IAM stack. As a workaround, manually invoke the model once in the Bedrock console playground to trigger the Marketplace subscription.\n\n---\n\n## Notices\n\nCustomers are responsible for making their own independent assessment of the information in this Guidance. This Guidance: (a) is for informational purposes only, (b) represents AWS current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided \"as is\" without warranties, representations, or conditions of any kind, whether express or implied. 
AWS responsibilities and liabilities to its customers are controlled by AWS agreements, and this Guidance is not part of, nor does it modify, any agreement between AWS and its customers.\n\n---\n\n## Authors\n- Randall Potter\n\n---\n\n## 📖 Further Reading\n\n### 🤖 Amazon Bedrock \u0026 Foundation Models\n- [Amazon Bedrock Developer Experience](https://aws.amazon.com/bedrock/developer-experience/) - Foundation model choice and customization\n- [Anthropic's Claude in Amazon Bedrock](https://aws.amazon.com/bedrock/anthropic/) - Claude Opus 4.6, Sonnet 4.5, Haiku 4.5 hybrid reasoning models\n- [Claude Sonnet 4.5 in Amazon Bedrock](https://aws.amazon.com/blogs/aws/introducing-claude-sonnet-4-5-in-amazon-bedrock-anthropics-most-intelligent-model-best-for-coding-and-complex-agents/) - Most intelligent model for coding and complex agents\n- [Claude Opus 4.6 in Amazon Bedrock](https://aws.amazon.com/blogs/machine-learning/claude-opus-4-5-now-in-amazon-bedrock/) - Tool search, extended thinking, and agent capabilities\n- [Amazon Nova Foundation Models](https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/) - Nova Micro, Lite, Pro, Premier - frontier intelligence\n- [Using Amazon Nova in AI Agents](https://docs.aws.amazon.com/nova/latest/userguide/agents-use-nova.html) - Nova as foundation model for agents\n\n### 🚀 Amazon Bedrock AgentCore\n- [Amazon Bedrock AgentCore Overview](https://aws.amazon.com/bedrock/agentcore/) - Build, deploy, and operate agents at scale\n- [AgentCore Gateway Guide](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/gateway-building.html) - Set up unified tool connectivity\n- [AgentCore Gateway Blog](https://aws.amazon.com/blogs/machine-learning/introducing-amazon-bedrock-agentcore-gateway-transforming-enterprise-ai-agent-tool-development/) - Transforming enterprise AI agent tool development\n- [AgentCore 
Runtime](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agents-tools-runtime.html) - Secure serverless hosting for AI agents\n\n### ⚡ AWS Lambda\n- [Lambda Layers Overview](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html) - Managing dependencies with layers\n- [Python Lambda Layers](https://docs.aws.amazon.com/lambda/latest/dg/python-layers.html) - Working with layers for Python functions\n- [Adding Layers to Functions](https://docs.aws.amazon.com/lambda/latest/dg/adding-layers.html) - Layer configuration and management\n\n### 🔐 Amazon Cognito\n- [OAuth 2.0 Grants](https://docs.aws.amazon.com/cognito/latest/developerguide/federation-endpoints-oauth-grants.html) - Authorization code, implicit, and client credentials\n- [M2M Authorization](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-define-resource-servers.html) - Scopes, resource servers, and machine-to-machine auth\n- [M2M Security Best Practices](https://aws.amazon.com/blogs/security/how-to-monitor-optimize-and-secure-amazon-cognito-machine-to-machine-authorization/) - Monitor, optimize, and secure M2M authorization\n\n### 📈 Observability\n- [CloudWatch + X-Ray Integration](https://docs.aws.amazon.com/xray/latest/devguide/xray-services-cloudwatch.html) - Enhanced application monitoring\n- [Cross-Account Tracing](https://docs.aws.amazon.com/xray/latest/devguide/xray-console-crossaccount.html) - Distributed tracing across accounts\n- [AWS Observability Best Practices](https://aws.amazon.com/blogs/publicsector/building-resilient-public-services-with-aws-observability-best-practices/) - Logs, metrics, and traces\n\n### 📦 Amazon S3\n- [S3 as Data Lake Storage](https://docs.aws.amazon.com/whitepapers/latest/building-data-lakes/amazon-s3-data-lake-storage-platform.html) - Central storage platform best practices\n- [S3 Performance Optimization](https://aws.amazon.com/s3/whitepaper-best-practices-s3-performance/) - Design patterns for optimal 
performance\n\n### 💻 Amazon Kiro IDE\n- [Amazon Kiro Overview](https://aws.amazon.com/kiro/) - Agentic IDE for spec-driven development\n- [Kiro with AWS Builder ID](https://docs.aws.amazon.com/signin/latest/userguide/builder_id-apps.html) - Sign in and get started with Kiro\n- [Nova Act IDE Extension](https://aws.amazon.com/blogs/aws/accelerate-ai-agent-development-with-the-nova-act-ide-extension/) - Accelerate AI agent development in Kiro\n- [Production-Ready AI Agents at Scale](https://aws.amazon.com/blogs/machine-learning/enabling-customers-to-deliver-production-ready-ai-agents-at-scale/) - Kiro as part of the agent development ecosystem\n","funding_links":[],"categories":["Community Projects"],"sub_categories":["For PyPI Packages"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faws-samples%2Fsample-badgers","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Faws-samples%2Fsample-badgers","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faws-samples%2Fsample-badgers/lists"}