# sp_StatUpdate

**Priority-based statistics maintenance for SQL Server 2016+**

*Updates worst stats first. Stops when you tell it to.
Tells you if it got killed.*

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![SQL Server 2016+](https://img.shields.io/badge/SQL%20Server-2016%2B-blue.svg)](https://www.microsoft.com/sql-server)
[![Azure SQL](https://img.shields.io/badge/Azure%20SQL-Supported-0078D4.svg)](https://azure.microsoft.com/products/azure-sql)

## Why This Exists

| Problem | Fix |
|---------|-----|
| Alphabetical ordering | `@SortOrder = 'MODIFICATION_COUNTER'` -- worst first |
| 10-hour jobs killed at 5 AM | `@TimeLimit` -- stops gracefully, logs what's left |
| "Did it finish or get killed?" | START/END markers in CommandLog |
| NORECOMPUTE orphans | `@TargetNorecompute = 'Y'` -- finds and refreshes them |
| Large stats that never finish | `@LongRunningThresholdMinutes` -- auto-reduce sample rate |
| Query Store knows what's hot | `@QueryStore = 'CPU'` -- prioritize by workload metric |
| QS enrichment too slow | `@QueryStoreTopPlans = 200` -- parse only top N plans |
| Cascading failures | `@FailFast = 1` -- stop on first error |
| AG secondary falls behind | `@MaxAGRedoQueueMB` -- pauses when redo queue is deep |
| tempdb pressure during FULLSCAN | `@MinTempdbFreeMB` -- checks before each stat update |
| Azure DTU/vCore concerns | Auto-detects Azure SQL DB vs MI, platform-specific warnings |
| Priority pass finishes early | `@MopUpPass = 'Y'` -- broad sweep with remaining time |

## Quick Start

```sql
-- 1. Install prerequisites (Ola Hallengren's CommandLog table)
-- Download from: https://ola.hallengren.com/scripts/CommandLog.sql

-- 2. Install sp_StatUpdate
-- Run sp_StatUpdate.sql in your maintenance database

-- 3. Run statistics maintenance
EXEC dbo.sp_StatUpdate
    @Databases = N'YourDatabase',
    @TimeLimit = 3600;                  -- 1 hour limit
    -- Defaults: @Preset='DEFAULT', @TargetNorecompute='BOTH', @LogToTable='Y'
```

## Requirements

**DROP-IN COMPATIBLE** with [Ola Hallengren's SQL Server Maintenance Solution](https://ola.hallengren.com).

| Requirement | Details |
|-------------|---------|
| **SQL Server** | 2016+ (uses `STRING_SPLIT`). 2016 SP2+ recommended for MAXDOP support |
| **Azure SQL** | Database (EngineEdition 5), Managed Instance (8), and Edge (9) supported |
| **dbo.CommandLog** | [CommandLog.sql](https://ola.hallengren.com/scripts/CommandLog.sql) or set `@LogToTable = 'N'` |
| **dbo.Queue** | [Queue.sql](https://ola.hallengren.com/scripts/Queue.sql) -- only for `@StatsInParallel = 'Y'` |

**Note**: `dbo.QueueStatistic` is auto-created on first parallel run. `dbo.CommandExecute` is NOT required.

## Presets

v3 uses a preset-first API. Choose a preset, then override individual parameters as needed.
Explicit parameters always win over preset defaults.

| Preset | TimeLimit | SortOrder | QueryStore | MopUp | Sample | Description |
|--------|-----------|-----------|------------|-------|--------|-------------|
| `DEFAULT` | 18000 (5h) | MODIFICATION_COUNTER | OFF | N | auto | Balanced default for any workload |
| `NIGHTLY` | 3600 (1h) | QUERY_STORE | CPU | Y | auto | QS-prioritized nightly job with mop-up |
| `WEEKLY_FULL` | 14400 (4h) | QUERY_STORE | CPU | Y | 100 | Comprehensive weekly: FULLSCAN + lower thresholds |
| `OLTP_LIGHT` | 1800 (30m) | MODIFICATION_COUNTER | OFF | N | auto | Low-impact OLTP: high threshold, inter-stat delay |
| `WAREHOUSE` | unlimited | ROWS | OFF | N | 100 | Data warehouse: FULLSCAN, no time limit |

```sql
-- Nightly maintenance (1hr, QS-prioritized, mop-up)
EXEC dbo.sp_StatUpdate @Preset = 'NIGHTLY', @Databases = 'USER_DATABASES';

-- Weekly comprehensive (4hr, FULLSCAN)
EXEC dbo.sp_StatUpdate @Preset = 'WEEKLY_FULL', @Databases = 'USER_DATABASES';

-- OLTP with minimal impact (30min, high thresholds, delays)
EXEC dbo.sp_StatUpdate @Preset = 'OLTP_LIGHT', @Databases = 'MyOLTPDatabase';

-- Data warehouse full refresh (no limit, FULLSCAN)
EXEC dbo.sp_StatUpdate @Preset = 'WAREHOUSE', @Databases = 'MyDW';

-- Preset + overrides: NIGHTLY preset but with 2hr time limit
EXEC dbo.sp_StatUpdate @Preset = 'NIGHTLY', @Databases = 'USER_DATABASES', @TimeLimit = 7200;
```

## Common Scenarios

### Time-Limited Nightly Runs

```sql
-- Nightly job: 11 PM - 4 AM window (5 hours)
EXEC dbo.sp_StatUpdate
    @Databases = N'USER_DATABASES, -DevDB, -ReportingDB',
    @TimeLimit = 18000,
    @SortOrder = N'MODIFICATION_COUNTER';  -- Worst stats first
```

### Query Store-Driven Prioritization

```sql
-- Let Query Store tell you what matters
EXEC dbo.sp_StatUpdate
    @Databases = N'Production',
    @QueryStore = N'CPU',                   -- Or DURATION, READS, AVG_CPU, MEMORY_GRANT, etc.
    @SortOrder = N'QUERY_STORE',
    @TimeLimit = 3600;

-- Large QS catalog? Limit XML plan parsing to top 200 plans by CPU
EXEC dbo.sp_StatUpdate
    @Databases = N'Production',
    @QueryStore = N'CPU',
    @QueryStoreTopPlans = 200,              -- Default 500. NULL = unlimited
    @SortOrder = N'QUERY_STORE',
    @TimeLimit = 3600;
```

**Available QS metrics:** `CPU`, `DURATION`, `READS`, `EXECUTIONS`, `AVG_CPU`, `MEMORY_GRANT`, `TEMPDB_SPILLS` (SQL 2017+), `PHYSICAL_READS`, `AVG_MEMORY`, `WAITS`

**Performance note:** Phase 6 (QS enrichment) parses plan XML to find table references. On databases with 10,000+ QS plans, this can take minutes. `@QueryStoreTopPlans` limits XML parsing to the most impactful plans. The proc also skips Phase 6 entirely when QS has no recent runtime stats within `@QueryStoreRecentHours`.

### NORECOMPUTE Stats Refresh

```sql
-- Find and refresh forgotten NORECOMPUTE stats
EXEC dbo.sp_StatUpdate
    @Databases = N'Production',
    @TargetNorecompute = N'Y',
    @ModificationThreshold = 50000,
    @TimeLimit = 1800;
```

### AG-Safe Maintenance

```sql
-- Pause if any secondary falls behind by 500 MB redo
EXEC dbo.sp_StatUpdate
    @Databases = N'USER_DATABASES',
    @MaxAGRedoQueueMB = 500,
    @MaxAGWaitMinutes = 10,                 -- Wait up to 10 min for drain
    @TimeLimit = 3600;
```

### Adaptive Sampling for Slow Stats

```sql
-- Stats that historically took >2 hours get 5% sample
EXEC dbo.sp_StatUpdate
    @Databases = N'Production',
    @LongRunningThresholdMinutes = 120,
    @LongRunningSamplePercent = 5,
    @TimeLimit = 14400;
```

### Dry Run Preview

```sql
EXEC dbo.sp_StatUpdate
    @Databases = N'Production',
    @Execute = N'N',
    @WhatIfOutputTable = N'#Preview',
    @Debug = 1;

SELECT * FROM #Preview ORDER BY SequenceNum;
```

### Mop-Up Pass (Use Remaining Time)

```sql
-- 2-hour window: priority pass first, then broad sweep with remaining time
EXEC dbo.sp_StatUpdate
    @Databases = N'USER_DATABASES',
    @TimeLimit = 7200,
    @MopUpPass = N'Y',
    @MopUpMinRemainingSeconds = 120;
```

The priority pass applies your configured thresholds and sort order. If it completes
with time to spare, the mop-up pass discovers every stat with `modification_counter > 0`
that wasn't already updated in this run and processes them by modification count descending.
Requires `@LogToTable = 'Y'` and `@Execute = 'Y'`. Not compatible with `@StatsInParallel`.

### Absolute Stop Time

```sql
-- Stop at 4 AM regardless of when the job started
EXEC dbo.sp_StatUpdate
    @Databases = N'USER_DATABASES',
    @StopByTime = N'04:00';
```

## Parameter Reference

Run `EXEC sp_StatUpdate @Help = 1` for complete documentation including operational notes and preset details.

### v3 API Summary

v3 has **33 input parameters** (was 58 in v2) plus **10 OUTPUT parameters**. 25 parameters from v2 were absorbed into preset-controlled internal variables.
Explicit parameters always override preset defaults.

### Database & Table Selection

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@Statistics` | `NULL` | Direct stat references: `'Schema.Table.Stat'` (comma-separated, skips discovery) |
| `@Databases` | Current DB | `USER_DATABASES`, `SYSTEM_DATABASES`, `ALL_DATABASES`, `AVAILABILITY_GROUP_DATABASES`, wildcards (`%Prod%`), exclusions (`-DevDB`) |
| `@Tables` | All | Table filter (comma-separated `Schema.Table`) |
| `@ExcludeTables` | `NULL` | Exclude tables by LIKE pattern (`%Archive%`) |
| `@ExcludeStatistics` | `NULL` | Exclude stats by LIKE pattern (`_WA_Sys%`) |

### Preset & Threshold Configuration

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@Preset` | `'DEFAULT'` | `DEFAULT`, `NIGHTLY`, `WEEKLY_FULL`, `OLTP_LIGHT`, `WAREHOUSE` |
| `@TargetNorecompute` | `'BOTH'` | `'Y'`=NORECOMPUTE only, `'N'`=regular only, `'BOTH'`=all |
| `@ModificationThreshold` | Preset-dependent | Minimum modifications to qualify (DEFAULT=5000) |
| `@StaleHours` | `NULL` | Minimum hours since last update |

### Execution Control

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@TimeLimit` | Preset-dependent | Max seconds (DEFAULT=18000). `NULL` = unlimited |
| `@StopByTime` | `NULL` | Absolute wall-clock stop time (`'04:00'` = 4 AM). Overrides `@TimeLimit` |
| `@BatchLimit` | `NULL` | Max stats per run |
| `@SortOrder` | Preset-dependent | Priority order (see below) |
| `@MopUpPass` | Preset-dependent | `'Y'` = broad sweep after priority pass |
| `@Execute` | `'Y'` | `'N'` for dry run |
| `@FailFast` | `0` | `1` = abort on first error |

### Sort Orders

| Value | Description |
|-------|-------------|
| `MODIFICATION_COUNTER` | Most modifications first (DEFAULT/OLTP_LIGHT preset) |
| `QUERY_STORE` | Highest Query Store metric first (NIGHTLY/WEEKLY_FULL preset) |
| `ROWS` | Largest tables first (WAREHOUSE preset) |
| `DAYS_STALE` | Oldest stats first |
| `PAGE_COUNT` | Largest tables by page count first |
| `FILTERED_DRIFT` | Filtered stats with drift first |
| `AUTO_CREATED` | User-created stats before auto-created |
| `RANDOM` | Random order |

### Query Store Integration

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@QueryStore` | Preset-dependent | `'OFF'` or metric name: `CPU`, `DURATION`, `READS`, `EXECUTIONS`, `AVG_CPU`, `MEMORY_GRANT`, `TEMPDB_SPILLS`, `PHYSICAL_READS`, `AVG_MEMORY`, `WAITS` |
| `@QueryStoreTopPlans` | `500` | Max plans to XML-parse. `NULL` = unlimited. Lower = faster Phase 6 |
| `@QueryStoreMinExecutions` | `100` | Minimum plan executions to boost |
| `@QueryStoreRecentHours` | `168` | Only consider plans from last N hours (7 days) |

### Sampling

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@StatisticsSample` | Preset-dependent | `NULL`=SQL Server decides, `100`=FULLSCAN |
| `@MaxDOP` | `NULL` | MAXDOP for UPDATE STATISTICS (SQL 2016 SP2+) |
| `@LongRunningThresholdMinutes` | `NULL` | Stats that took longer get forced sample rate |
| `@LongRunningSamplePercent` | `10` | Sample percent for long-running stats |

### Safety Checks

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@MaxAGRedoQueueMB` | `NULL` | Pause when AG secondary redo queue exceeds this MB |
| `@MaxAGWaitMinutes` | `10` | Max minutes to wait for redo queue to drain |
| `@MinTempdbFreeMB` | `NULL` | Min tempdb free space (MB). `@FailFast=1` aborts, else warns |

### Logging & Output

> **Two-phase logging:** Like Ola Hallengren's `CommandExecute`, sp_StatUpdate inserts a CommandLog row with NULL `EndTime` before each stat update, then updates `EndTime` on completion.
Query in-progress stats with: `SELECT * FROM dbo.CommandLog WHERE EndTime IS NULL AND CommandType = 'UPDATE_STATISTICS';`

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@LogToTable` | `'Y'` | Log to dbo.CommandLog |
| `@WhatIfOutputTable` | `NULL` | Table for dry-run commands (`@Execute = 'N'` required) |
| `@MopUpMinRemainingSeconds` | `60` | Minimum seconds remaining to trigger mop-up |
| `@Debug` | `0` | `1` = verbose diagnostic output |

### Parallel Execution

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@StatsInParallel` | `'N'` | `'Y'` = queue-based parallel processing via `dbo.QueueStatistic` |

### OUTPUT Parameters

| Parameter | Description |
|-----------|-------------|
| `@Version` | Procedure version string |
| `@VersionDate` | Procedure version date |
| `@StatsFoundOut` | Total qualifying stats discovered |
| `@StatsProcessedOut` | Stats attempted (succeeded + failed) |
| `@StatsSucceededOut` | Stats updated successfully |
| `@StatsFailedOut` | Stats that failed to update |
| `@StatsRemainingOut` | Stats not processed (time/batch limit) |
| `@DurationSecondsOut` | Total run duration in seconds |
| `@WarningsOut` | Collected warnings (see below) |
| `@StopReasonOut` | Why execution stopped (see below) |

### StopReason Values

`COMPLETED`, `TIME_LIMIT`, `BATCH_LIMIT`, `FAIL_FAST`, `CONSECUTIVE_FAILURES`, `AG_REDO_QUEUE`, `TEMPDB_PRESSURE`, `NO_QUALIFYING_STATS`, `KILLED`

### Warning Values

`LOW_UPTIME`, `BACKUP_RUNNING`, `AZURE_SQL`, `AZURE_MI`, `RESOURCE_GOVERNOR`, `AG_REDO_ELEVATED`, `TEMPDB_LOW`, `RLS_DETECTED`, `COLUMNSTORE_CONTEXT`, `QS_FORCED_PLANS`, `LOG_SPACE_HIGH`, `WIDE_STATS`, `FILTER_MISMATCH`

## Environment Detection

Debug mode (`@Debug = 1`) automatically reports:

- **SQL Server version** and build number
- **Cardinality Estimator** version per database (Legacy CE 70 vs New CE 120+)
- **Trace flags** affecting statistics (2371, 9481, 2389/2390, 4139)
- **DB-scoped configs** (LEGACY_CARDINALITY_ESTIMATION)
- **Hardware context** (CPU count, memory, NUMA nodes, uptime)
- **Azure platform** (SQL DB vs Managed Instance vs Edge, with platform-specific guidance)
- **AG primary status** and redo queue depth
- **tempdb free space**
- **Resource Governor** active resource pools

### Per-Database Detection

When `@Debug = 1`, after discovery the proc checks each database for:

- **Row-Level Security** policies that may bias histograms
- **Wide statistics** (>8 columns) that increase tempdb/memory pressure
- **Filtered index mismatches** where the stat filter differs from the index filter
- **Columnstore indexes** where `modification_counter` underreports
- **Non-persisted computed columns** with evaluation cost during stat updates
- **Stretch Database** tables (auto-skipped, deprecated feature)
- **Query Store forced plans** on updated tables (post-update check, automatic)
- **Transaction log space** >90% full during FULLSCAN operations

## Monitoring

### Summary Result Set

Every run returns a summary row:

```text
Status         StatusMessage                                      StatsFound  ...
-------------- -------------------------------------------------- ----------  ---
SUCCESS        All 142 stat(s) updated successfully               142         ...
WARNING        Incomplete: 47 stat(s) remaining (TIME_LIMIT)      189         ...
ERROR          Failed: 3 stat(s), 47 remaining (FAIL_FAST)        150         ...
```

### Run History

```sql
-- Recent runs: did they finish or get killed?
SELECT
    CASE WHEN e.ID IS NOT NULL THEN 'Completed' ELSE 'KILLED' END AS Status,
    s.StartTime,
    e.ExtendedInfo.value('(/Summary/StopReason)[1]', 'nvarchar(50)') AS StopReason,
    e.ExtendedInfo.value('(/Summary/StatsProcessed)[1]', 'int') AS Processed,
    e.ExtendedInfo.value('(/Summary/StatsRemaining)[1]', 'int') AS Remaining
FROM dbo.CommandLog s
LEFT JOIN dbo.CommandLog e
    ON e.CommandType = 'SP_STATUPDATE_END'
    AND e.ExtendedInfo.value('(/Summary/RunLabel)[1]', 'nvarchar(100)') =
        s.ExtendedInfo.value('(/Parameters/RunLabel)[1]', 'nvarchar(100)')
WHERE s.CommandType = 'SP_STATUPDATE_START'
ORDER BY s.StartTime DESC;
```

### Programmatic Access

```sql
DECLARE @Found int, @Processed int, @Failed int, @Remaining int,
        @StopReason nvarchar(50), @Warnings nvarchar(max);

EXEC dbo.sp_StatUpdate
    @Databases = N'Production',
    @TimeLimit = 3600,
    @StatsFoundOut = @Found OUTPUT,
    @StatsProcessedOut = @Processed OUTPUT,
    @StatsFailedOut = @Failed OUTPUT,
    @StatsRemainingOut = @Remaining OUTPUT,
    @StopReasonOut = @StopReason OUTPUT,
    @WarningsOut = @Warnings OUTPUT;

-- Use outputs for alerting, logging, or conditional logic
IF @Failed > 0 OR @StopReason = 'CONSECUTIVE_FAILURES'
    RAISERROR(N'Alert: Statistics maintenance had failures', 10, 1) WITH NOWAIT;

IF @Warnings LIKE '%AG_REDO_ELEVATED%'
    RAISERROR(N'Note: AG redo queue was elevated during maintenance', 10, 1) WITH NOWAIT;
```

### Real-Time Progress

```sql
-- Log progress to CommandLog every 50 stats (secure, access-controlled)
EXEC sp_StatUpdate @Databases = 'USER_DATABASES', @TimeLimit = 3600;
-- Query in-progress stats:
SELECT * FROM dbo.CommandLog WHERE EndTime IS NULL AND CommandType = 'UPDATE_STATISTICS';
```

## Diagnostic Tool

**sp_StatUpdate_Diag** analyzes CommandLog history and produces actionable recommendations.
Two viewing modes: management-friendly dashboard or full DBA deep-dive.

### Management View (Default)

```sql
-- Show your boss: letter grades + plain English recommendations
EXEC dbo.sp_StatUpdate_Diag;
```

Returns 2 result sets:

| RS | Name | What It Shows |
|----|------|---------------|
| 1 | **Executive Dashboard** | A-F letter grades for Overall, Completion, Reliability, Speed, and Workload Focus |
| 2 | **Recommendations** | Severity-categorized findings with fix-it SQL. Includes I10: a synthesized `EXEC sp_StatUpdate` call tuned to your environment |

Example dashboard output:

```text
Category         Grade  Score  Headline
---------------- -----  -----  --------------------------------------------------------
OVERALL          B      75     Statistics maintenance is healthy with minor opportunities...
COMPLETION       A      92     Nearly all qualifying statistics are being updated each run.
RELIABILITY      C      55     2 run(s) were killed before completing. Check SQL Agent...
SPEED            A      90     Statistics are being updated very quickly (0.4 sec/stat).
WORKLOAD FOCUS   B      78     Query Store prioritization is working well.
```

### DBA Deep-Dive

```sql
-- Full technical detail: 13 result sets
EXEC dbo.sp_StatUpdate_Diag @ExpertMode = 1;
```

| RS | Name | Description |
|----|------|-------------|
| 1 | Executive Dashboard | Letter grades (always returned) |
| 2 | Recommendations | Findings with remediation SQL (always returned) |
| 3 | Run Health Summary | Aggregate metrics: total runs, killed, completion %, QS coverage |
| 4 | Run Detail | Per-run: duration, stats found/processed, stop reason, efficacy |
| 5 | Top Tables | Tables consuming the most maintenance time |
| 6 | Failing Statistics | Stats with repeated errors |
| 7 | Long-Running Statistics | Stats exceeding the duration threshold |
| 8 | Parameter Change History | How parameters changed across runs |
| 9 | Obfuscation Map | Only when `@Obfuscate = 1` |
| 10 | Efficacy Trend (Weekly) | Week-over-week QS prioritization metrics |
| 11 | Efficacy Detail (Per-Run) | Run-over-run with delta vs prior |
| 12 | High-CPU Stat Positions | Top-workload stats from most recent run |
| 13 | QS Performance Correlation | Per-stat CPU trend: are queries getting faster after updates? |

### Proving QS Prioritization Value

After switching from modification-counter to Query Store CPU-based sort order, show leadership the impact:

```sql
-- Broad trending: last 100 days, close-up on last 14
EXEC dbo.sp_StatUpdate_Diag
    @ExpertMode = 1,
    @EfficacyDaysBack = 100,
    @EfficacyDetailDays = 14;
```

Key result sets for this story:

- **RS 10 (Efficacy Trend)**: Weekly roll-up showing high-CPU stats reaching first quartile, workload coverage %, trend direction
- **RS 11 (Efficacy Detail)**: Per-run showing completion %, time-to-critical stats, delta vs prior run
- **RS 13 (QS Performance Correlation)**: Per-stat CPU before vs after -- "5 of 5 tracked stats show 8% lower query CPU"
- **I7 check**: Automatically detects the configuration change point and compares before/after metrics
- **I8 check**: Summarizes whether queries are actually faster after stat updates

### Persistent History

The diagnostic tool auto-creates `dbo.StatUpdateDiagHistory` to track health scores over time.
Each run inserts only new data (watermark-based, no duplicates).

```sql
-- View health score trend
SELECT CapturedAt, RunLabel, HealthScore, OverallGrade, CompletionPct, WorkloadCoveragePct
FROM dbo.StatUpdateDiagHistory
ORDER BY CapturedAt DESC;

-- Skip history creation (testing or ephemeral environments)
EXEC dbo.sp_StatUpdate_Diag @SkipHistory = 1;
```

### Grade Overrides

Customize the Executive Dashboard when you know certain issues are expected or irrelevant.

#### @GradeOverrides -- Force Grades or Ignore Categories

Force a specific letter grade (A/B/C/D/F) or exclude a category entirely (IGNORE).

```sql
-- "I know about the 2 killed runs -- don't penalize the score"
EXEC dbo.sp_StatUpdate_Diag @GradeOverrides = 'RELIABILITY=A';

-- "I don't use Query Store -- exclude workload focus from my score"
EXEC dbo.sp_StatUpdate_Diag @GradeOverrides = 'WORKLOAD=IGNORE';

-- Force multiple: known slow stats + don't care about QS
EXEC dbo.sp_StatUpdate_Diag
    @GradeOverrides = 'SPEED=B, WORKLOAD=IGNORE';

-- "Only care about completion and speed -- ignore everything else"
EXEC dbo.sp_StatUpdate_Diag
    @GradeOverrides = 'RELIABILITY=IGNORE, WORKLOAD=IGNORE';

-- Force a low grade to flag a known problem for management visibility
EXEC dbo.sp_StatUpdate_Diag @GradeOverrides = 'COMPLETION=F';
```

**Valid categories:** `COMPLETION`, `RELIABILITY`, `SPEED`, `WORKLOAD`
**Valid values:** `A`, `B`, `C`, `D`, `F` (force grade), `IGNORE` (exclude from OVERALL score)

#### @GradeWeights -- Custom Category Weights

Change how much each category contributes to the OVERALL score.
Values are integers that auto-normalize to sum to 100%.

```sql
-- Default weights: COMPLETION=30, RELIABILITY=25, SPEED=20, WORKLOAD=25

-- Single category override: bump completion importance
-- 50 + 25(default) + 20(default) + 25(default) = 120 -> normalized to 42/21/17/21
EXEC dbo.sp_StatUpdate_Diag @GradeWeights = 'COMPLETION=50';

-- Two categories: only care about completion and speed equally
-- 50 + 25(default) + 50 + 25(default) = 150 -> normalized to 33/17/33/17
EXEC dbo.sp_StatUpdate_Diag @GradeWeights = 'COMPLETION=50, SPEED=50';

-- Weight=0 is the same as IGNORE -- excludes category from OVERALL
EXEC dbo.sp_StatUpdate_Diag @GradeWeights = 'WORKLOAD=0';

-- All four explicit (auto-normalized, don't need to sum to 100)
EXEC dbo.sp_StatUpdate_Diag
    @GradeWeights = 'COMPLETION=40, RELIABILITY=10, SPEED=30, WORKLOAD=20';
```

#### Combining Overrides and Weights

```sql
-- Force reliability to A (known kills are expected) AND
-- weight completion heavily for management reporting
EXEC dbo.sp_StatUpdate_Diag
    @GradeOverrides = 'RELIABILITY=A',
    @GradeWeights = 'COMPLETION=40, WORKLOAD=40';
```

#### Dashboard Output with Overrides

```
Category        Grade  Score  Headline
--------------  -----  -----  ---------------------------------------------------
OVERALL         B         86  Statistics maintenance is healthy... [Overrides active]
COMPLETION      A        100  Nearly all qualifying statistics are being updated...
RELIABILITY     A         28  [OVERRIDE: A] 2 run(s) were killed...
SPEED           -          0  [IGNORED] Updates are slow at 17.9 sec/stat...
WORKLOAD FOCUS  D         50  Query Store prioritization is not enabled...
```

- `[OVERRIDE: A]` -- grade forced; Detail column shows `(actual score: 28)`
- `[IGNORED]` -- excluded from OVERALL; Grade=`-`, Score=0
- `[Overrides active]` -- shown on OVERALL when any override/weight change is active

**Weight normalization:** Weights are integers that auto-normalize to sum to 100%. Passing a single category (e.g., `'COMPLETION=50'`) keeps the other three at their defaults (25, 20, 25), then all four are normalized together (50+25+20+25=120 -> 42/21/17/21%). A weight of 0 is equivalent to IGNORE.

**Note:** The history table (`StatUpdateDiagHistory`) always uses hardcoded weights (30/25/20/25) -- overrides only affect the current dashboard view, not persisted scores.

### Obfuscated Mode

Hash all database, table, and statistics names for safe external sharing. Prefixes (`IX_`, `PK_`, `DB_`, `_WA_Sys_`) are preserved so consultants can still reason about object types.

#### Quick Start: Share a Report with a Consultant

```sql
-- 1. Generate obfuscated report with a seed (keeps tokens stable across runs)
EXEC dbo.sp_StatUpdate_Diag
    @Obfuscate = 1,
    @ExpertMode = 1,
    @ObfuscationSeed = N'acme-2026-q1';
```

Output tokens look like: `DB_7f2a`, `TBL_e4c1`, `IX_STAT_9b3d`.
The seed ensures the same object always maps to the same token -- so if a consultant says \"TBL_e4c1 is slow\", you can decode it consistently.\n\n#### T-SQL Examples\n\n```sql\n-- Basic: one-off obfuscated output (random hashes, no persistence)\nEXEC dbo.sp_StatUpdate_Diag @Obfuscate = 1, @ExpertMode = 1;\n\n-- Seeded: deterministic hashes (same name = same token every time)\nEXEC dbo.sp_StatUpdate_Diag\n    @Obfuscate = 1,\n    @ExpertMode = 1,\n    @ObfuscationSeed = N'acme-2026-q1';\n\n-- Seeded + persisted map table: decode tokens later without re-running\nEXEC dbo.sp_StatUpdate_Diag\n    @Obfuscate = 1,\n    @ExpertMode = 1,\n    @ObfuscationSeed = N'acme-2026-q1',\n    @ObfuscationMapTable = N'dbo.DiagObfMap';\n\n-- After running with @ObfuscationMapTable, the proc prints a decode query:\n--   === Decode obfuscated tokens ===\n--   SELECT ObjectType, OriginalName, ObfuscatedName\n--   FROM dbo.DiagObfMap WHERE ObfuscatedName = N'\u003cpaste_token_here\u003e';\n```\n\n#### PowerShell: Multi-Server Obfuscated Reports\n\nWhen running the wrapper with `-Obfuscate`, three files are produced per run:\n\n| File | Contains | Share externally? 
|\n|------|----------|-------------------|\n| `*_SAFE_TO_SHARE.{md,html,json}` | Obfuscated names only | Yes |\n| `*_CONFIDENTIAL.{md,html,json}` | Real server/database/table names | **No** |\n| `*_CONFIDENTIAL_DECODE.sql` | Standalone T-SQL script to decode tokens | **No** |\n\n```powershell\n# Generate reports for 3 servers -- Markdown format, seeded obfuscation\n.\\Invoke-StatUpdateDiag.ps1 `\n    -Servers \"PROD-SQL01\", \"PROD-SQL02\", \"PROD-SQL03\" `\n    -Obfuscate `\n    -ObfuscationSeed \"acme-2026-q1\" `\n    -OutputFormat Markdown `\n    -OutputPath \"C:\\temp\\diag\"\n\n# Output:\n#   C:\\temp\\diag\\sp_StatUpdate_Diag_20260310_SAFE_TO_SHARE.md   \u003c-- send this\n#   C:\\temp\\diag\\sp_StatUpdate_Diag_20260310_CONFIDENTIAL.md    \u003c-- keep this\n#   C:\\temp\\diag\\sp_StatUpdate_Diag_20260310_CONFIDENTIAL_DECODE.sql\n\n# Also persist the map table on each server for later decoding\n.\\Invoke-StatUpdateDiag.ps1 `\n    -Servers \"PROD-SQL01\", \"PROD-SQL02\" `\n    -Obfuscate `\n    -ObfuscationSeed \"acme-2026-q1\" `\n    -ObfuscationMapTable \"dbo.DiagObfMap\" `\n    -OutputPath \"C:\\temp\\diag\"\n```\n\nWithout `-Obfuscate`, a single report file is produced (no suffix).\n\n#### Typical Workflow: Consultant Engagement\n\n```\n1. DBA runs:     Invoke-StatUpdateDiag.ps1 -Servers ... -Obfuscate -ObfuscationSeed \"...\"\n2. DBA sends:    *_SAFE_TO_SHARE.md to consultant (no real names visible)\n3. Consultant:   \"TBL_e4c1 has a C2 finding -- stat IX_STAT_9b3d fails every run\"\n4. DBA decodes:  Opens _CONFIDENTIAL_DECODE.sql in SSMS, searches for TBL_e4c1\n5. DBA finds:    TBL_e4c1 = dbo.OrderHistory, IX_STAT_9b3d = IX_OrderHistory_Date\n6. 
6. DBA fixes:    The actual object, shares updated SAFE_TO_SHARE report to confirm
```

### Decoding Obfuscated Results

When a consultant returns findings referencing tokens like `TBL_e4c1`, you have two options:

**Option A: Use the decode SQL file (no server access needed)**

The `_CONFIDENTIAL_DECODE.sql` file is a standalone T-SQL script with the full map in a temp table:

```sql
-- 1. Open _CONFIDENTIAL_DECODE.sql in SSMS and execute it (creates #ObfuscationMap)
-- 2. Decode a specific token from the consultant's findings:
SELECT * FROM #ObfuscationMap WHERE ObfuscatedName = N'TBL_e4c1';
-- 3. Decode multiple tokens at once:
SELECT * FROM #ObfuscationMap WHERE ObfuscatedName IN (N'TBL_e4c1', N'IX_STAT_9b3d', N'DB_7f2a');
-- 4. Full map sorted by server:
SELECT * FROM #ObfuscationMap ORDER BY ServerName, ObjectType, OriginalName;
```

**Option B: Query the persisted map table on the server**

If you used `@ObfuscationMapTable` (T-SQL) or `-ObfuscationMapTable` (PowerShell):

```sql
-- Decode a single token
SELECT ObjectType, OriginalName, ObfuscatedName
FROM dbo.DiagObfMap
WHERE ObfuscatedName = N'TBL_e4c1';

-- Export full map to CSV (useful for Excel cross-referencing)
-- In SSMS: Results to File, then run:
SELECT ObjectType, OriginalName, ObfuscatedName
FROM dbo.DiagObfMap
ORDER BY ObjectType, OriginalName;

-- Decode across multiple servers via linked servers
SELECT 'PROD-SQL01' AS [Server], ObjectType, OriginalName, ObfuscatedName
FROM [PROD-SQL01].master.dbo.DiagObfMap
UNION ALL
SELECT 'PROD-SQL02', ObjectType, OriginalName, ObfuscatedName
FROM [PROD-SQL02].master.dbo.DiagObfMap
ORDER BY [Server], ObjectType, OriginalName;
```

**How obfuscation works:**

- **With a seed**: Hashes are **deterministic** -- the same object always produces the same token across servers, runs, and time.
This means `TBL_e4c1` in Monday's report is the same table as `TBL_e4c1` in Friday's report.
- **Without a seed**: Hashes are random per run. Useful for one-off sharing, but tokens can't be correlated across runs.
- The map table **appends** on each run (no data loss from prior runs).
- The `_CONFIDENTIAL_DECODE.sql` file is standalone -- it works in any SSMS session, no server access needed.
- Without the seed, the map, or the decode file, obfuscated tokens **cannot** be reversed (HASHBYTES is one-way).

### Custom Analysis

```sql
-- Last 90 days, only top 5 items, long-running threshold at 15 minutes
EXEC dbo.sp_StatUpdate_Diag
    @DaysBack = 90,
    @TopN = 5,
    @LongRunningMinutes = 15,
    @ExpertMode = 1;

-- CommandLog in a different database
EXEC dbo.sp_StatUpdate_Diag @CommandLogDatabase = N'DBATools';

-- Single result set mode (JSON) for programmatic consumption
EXEC dbo.sp_StatUpdate_Diag @SingleResultSet = 1, @ExpertMode = 1;
```

### Multi-Server (PowerShell)

```powershell
.\Invoke-StatUpdateDiag.ps1 `
    -Servers "Server1", "Server2,2500", "Server3" `
    -CommandLogDatabase "Maintenance" `
    -OutputPath ".\diag_output" `
    -OutputFormat Markdown
```

Cross-server analysis detects version skew and parameter inconsistencies.

### Diagnostic Checks

| Severity | ID | Checks |
|----------|----|--------|
| CRITICAL | C1-C5 | Killed runs, repeated stat failures, time limit exhaustion, degrading throughput, sample rate degradation |
| WARNING | W1-W10 | Suboptimal parameters, long-running stats, stale-stats backlog, overlapping runs, QS not effective, excessive overhead, mop-up ineffective, lock timeout ineffective, parameter churn |
| INFO | I1-I5 | Run health trends, parameter history, top tables by cost, unused features, version history |
| INFO | I6 | QS efficacy: "10 of 10 highest-workload stats updated in first 1 minute" |
| INFO | I7 | QS inflection: before/after comparison when QS prioritization was enabled |
| INFO | I8 | QS performance trend: per-stat CPU correlation across runs |
| INFO | I10 | Recommended configuration: synthesized EXEC call based on diagnostic findings |
| INFO | I11-I14 | Failure clustering, QS coverage drift, parallel opportunity, mop-up missing pagecount |

### Diag Parameter Reference

| Parameter | Default | Description |
|-----------|---------|-------------|
| `@DaysBack` | `30` | History window in days |
| `@ExpertMode` | `0` | `0` = dashboard + recommendations, `1` = all 13 result sets |
| `@SkipHistory` | `0` | `1` = skip persistent history table |
| `@GradeOverrides` | `NULL` | Force grades or ignore categories: `'RELIABILITY=A, SPEED=IGNORE'` |
| `@GradeWeights` | `NULL` | Custom category weights (auto-normalized to 100%): `'COMPLETION=50, WORKLOAD=50'` |
| `@Obfuscate` | `0` | `1` = hash all names for external sharing |
| `@ObfuscationSeed` | `NULL` | Salt for deterministic hashing |
| `@ObfuscationMapTable` | `NULL` | Persist obfuscation map to a table |
| `@EfficacyDaysBack` | `NULL` | QS efficacy broad window (NULL = `@DaysBack`) |
| `@EfficacyDetailDays` | `NULL` | QS efficacy close-up window (NULL = 14) |
| `@LongRunningMinutes` | `10` | Threshold for long-running stat detection |
| `@FailureThreshold` | `3` | Same stat failing N+ times = CRITICAL |
| `@TimeLimitExhaustionPct` | `80` | Warn if >X% of runs hit the time limit |
| `@ThroughputWindowDays` | `7` | Window for throughput trend comparison |
| `@TopN` | `20` | Top N items in detail result sets |
| `@CommandLogDatabase` | `NULL` | CommandLog location (NULL = current DB) |
| `@SingleResultSet` | `0` | `1` = JSON-formatted single result set |
| `@Debug` | `0` | `1` = verbose output |

## Extended Events

An XE session is included for runtime troubleshooting:

```sql
-- Create and start (see sp_StatUpdate_XE_Session.sql)
ALTER EVENT SESSION [sp_StatUpdate_Monitor] ON SERVER STATE = START;
```

Captures UPDATE STATISTICS commands, errors, lock waits, lock escalation, and long-running statements.
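While the session is running, you can peek at captured events without stopping it. This sketch assumes the session uses a ring_buffer target -- check sp_StatUpdate_XE_Session.sql for the actual target definition and adjust accordingly:

```sql
-- Sketch: inspect live events from a running XE session's ring_buffer target.
-- Assumption: sp_StatUpdate_Monitor writes to a ring_buffer; if it uses an
-- event_file target instead, read the .xel files with sys.fn_xe_file_target_read_file.
SELECT CAST(t.target_data AS xml) AS session_events
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON t.event_session_address = s.address
WHERE s.name = N'sp_StatUpdate_Monitor'
  AND t.target_name = N'ring_buffer';
```

The returned XML can be shredded with `.nodes()`/`.value()` to filter for specific events such as errors or lock escalations.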
## Migrating from v2

v3 simplifies the API from 58 to 33 input parameters. Most v2 scripts work with minor changes.

### Quick Migration Guide

| v2 Parameter | v3 Equivalent |
|--------------|---------------|
| `@QueryStorePriority = 'Y', @QueryStoreMetric = 'CPU'` | `@QueryStore = 'CPU'` |
| `@DaysStaleThreshold = 7` | `@StaleHours = 168` |
| `@HoursStaleThreshold = 48` | `@StaleHours = 48` |
| `@Preset = 'NIGHTLY_MAINTENANCE'` | `@Preset = 'NIGHTLY'` |
| `@Preset = 'WAREHOUSE_AGGRESSIVE'` | `@Preset = 'WAREHOUSE'` |
| `@TieredThresholds = 1` | Preset-controlled (always on for DEFAULT/NIGHTLY/WEEKLY_FULL) |
| `@ThresholdLogic = 'OR'` | Preset-controlled |
| `@ModificationPercent = 10` | Preset-controlled |
| `@MaxConsecutiveFailures = 5` | Preset-controlled |
| `@DelayBetweenStats = 2` | Preset-controlled (OLTP_LIGHT uses a 2s delay) |
| `@LockTimeout = 10` | Preset-controlled (OLTP_LIGHT uses 10s) |
| `@CleanupOrphanedRuns = 'Y'` | Always on |
| `@PersistSamplePercent = 'Y'` | Always on (when supported) |
| `@IncludeSystemObjects = 'Y'` | Removed (system objects excluded) |
| `@IncludeIndexedViews = 'Y'` | Removed (indexed views always included) |
| `@GroupByJoinPattern = 'Y'` | Removed |
| `@ExposeProgressToAllSessions = 'Y'` | Removed |
| `@CompletionNotifyTable` | Removed |
| `@LogSkippedToCommandLog` | Removed |
| `@ReturnDetailedResults` | Removed |
| `@ProgressLogInterval` | Removed |
| `@StatisticsFromTable` | Removed |
| `@FilteredStatsMode` | Preset-controlled |

### Example: v2 Agent Job to v3

```sql
-- v2 (old)
EXEC dbo.sp_StatUpdate
    @Databases = N'USER_DATABASES',
    @QueryStorePriority = N'Y',
    @QueryStoreMetric = N'CPU',
    @TieredThresholds = 1,
    @TimeLimit = 3600,
    @SortOrder = N'QUERY_STORE',
    @MopUpPass = N'Y',
    @LogToTable = N'Y';

-- v3 (new) -- same behavior, fewer params
EXEC dbo.sp_StatUpdate
    @Preset = N'NIGHTLY',
    @Databases = N'USER_DATABASES';
```

## Version History

- **3.2.1.2026.0417** - Phase 5 bug fix: the DIRECT_STRING discovery path (`@Statistics` param) now honors `@TargetNorecompute` and `@ExcludeStatistics` filters, matching staged-discovery Phase 1 behavior (gh-492).
- **3.2.2026.0417** - Phase 4 quality/perf batch (8 issues): Phase 6 plan feedback bounded by time + top-N, 5 COUNT_BIG scans collapsed to one SUM(CASE), orphan cleanup materializes END labels, per-table sys.partitions cache, per-database warning block skipped when a DB has zero stats, MAX_GRANT_PERCENT token-substituted, 6 empty CATCH blocks now surface to `@WarningsOut`, threshold-logic explanation gated behind `@Debug = 1` (gh-460, 463, 464, 466-470).
- **3.1.2026.0417** - Phase 1 correctness batch (9 issues): parallel early-return paths set OUTPUT params + summary result set, LOCK_TIMEOUT restored after the forced-plan check, `@QueryStore = AVG_CPU` sorts by average (not total), `@parameter_fingerprint` + additional correctness fixes (gh-451..459). Also Phase 2/3 diag correctness (10 issues) in sp_StatUpdate_Diag 2026.04.17.1 (gh-471..480).
- **3.0.2026.0407** - v3 architecture: preset-first API. 33 input params (was 58 in v2) + 10 OUTPUT params. 25 params absorbed into `@i_` internal variables controlled by presets (DEFAULT, NIGHTLY, WEEKLY_FULL, OLTP_LIGHT, WAREHOUSE). New `@QueryStore` param replaces `@QueryStorePriority` + `@QueryStoreMetric`. `@StaleHours` replaces `@DaysStaleThreshold` + `@HoursStaleThreshold`. Table-driven validation, 6-phase staged discovery only (no legacy fallback), unified mop-up filters.
Full behavioral parity with v2.37.
- **2.37.2026.0327** - WAITS enrichment unbounded-XML-parsing fix (metric gate + TopPlans limit), Phase 6 debug gate.
- **2.35.2026.0327** - `@QueryStoreMetric` WAITS + diag memory grant trending.
- **2.34.2026.0326** - QS discovery metric gaps: 6 issues resolved.
- **2.29.2026.0325** - Em dashes, AG sync-only redo, mop-up safety + 8 new diag checks (W8-W10, C5, I11-I14).
- **2.24.2026.0324** - Staged discovery hardening, legacy QS consistency, `@MopUpPass`, `@MopUpMinRemainingSeconds`.
- **2.23.2026.0324** - `@QueryStoreTopPlans` (Phase 6 XML parsing limit), early bail-out.
- **2.22.2026.0320** - AscendingKeyBoost, CE QUERY_OPTIMIZER_HOTFIXES, APC awareness, cursor-to-set-based.
- **Diag 2026.03.23.1** - I10 RECOMMENDED_CONFIG, `@ExpertMode`, Executive Dashboard A-F grades, persistent history.
- **2.16.2026.0308** - QS Efficacy Trending, ProcessingPosition, diag RS 9-13, I6/I7/I8.
- **2.14.2026.0304** - Bulk issue resolution (42 issues). `@OrphanedRunThresholdHours`. AG/safety guards.
- **2.8.2026.0302** - Comprehensive sweep (31 issues). `@IncludeIndexedViews`, `@LogSkippedToCommandLog`, QS forced-plan warning.
- **2.7.2026.0302** - AG redo queue pause, tempdb pressure check.
- **2.4.2026.0302** - Region markers, collation-aware comparisons, per-phase timing, 12-issue bug-fix sprint.
- **2.0.2026.0212** - Environment Intelligence, staged discovery, diagnostic tool (sp_StatUpdate_Diag).
- **1.9.2026.0206** - Status/StatusMessage columns, batch QS enrichment.
- **1.5.2026.0120** - CRITICAL: fixed the `@ExcludeStatistics` filter; incremental partition targeting.
- **1.4.2026.0119** - Query Store prioritization, filtered stats handling.
- **1.3.2026.0119** - Multi-database support, OUTPUT parameters, return codes.
- **1.0.2026.0117** - Initial public release.

## When to Use This (vs IndexOptimize)

**IndexOptimize** is battle-tested and handles indexes + stats together.
Use it for general maintenance.

**sp_StatUpdate** is for when you need:

- Priority ordering (worst stats first)
- Time-limited runs with graceful stops
- NORECOMPUTE targeting
- Query Store-driven prioritization (10 metrics, tunable plan parsing)
- Adaptive sampling for problematic stats
- AG-safe maintenance with redo queue awareness
- Mop-up pass for thorough coverage
- Programmatic access to results via OUTPUT parameters

## License

MIT License - see [LICENSE](LICENSE) for details.

Based on patterns from [Ola Hallengren's SQL Server Maintenance Solution](https://ola.hallengren.com) (MIT License).

## Acknowledgments

- [Ola Hallengren](https://ola.hallengren.com) - sp_StatUpdate wouldn't exist without his SQL Server Maintenance Solution. We use his CommandLog table, Queue patterns, and database selection syntax. If you're not already using his tools, start there.
- [Brent Ozar](https://www.brentozar.com) - years of emphasizing stats over index rebuilds, the First Responder Kit, and community education.
- [Erik Darling](https://www.erikdarling.com) - T-SQL coding style and performance insights. His diagnostic tools are excellent; I'm particularly fond of sp_LogHunter and sp_QuickieStore.
- [Tiger Team's AdaptiveIndexDefrag](https://github.com/microsoft/tigertoolbox) - the 5-tier adaptive threshold formula.
- [Colleen Morrow](https://www.sqlservercentral.com/blogs/better-living-thru-powershell-update-statistics-in-parallel) - the parallel statistics maintenance concept.