{"id":32705500,"url":"https://github.com/zig-utils/zig-benchmarks","last_synced_at":"2026-04-01T17:08:25.944Z","repository":{"id":320952383,"uuid":"1083829998","full_name":"zig-utils/zig-benchmarks","owner":"zig-utils","description":"A modern, performant, and beautiful benchmark framework for Zig.","archived":false,"fork":false,"pushed_at":"2026-03-18T21:31:15.000Z","size":84,"stargazers_count":5,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-03-18T22:43:09.414Z","etag":null,"topics":["benchmark-framework","benchmarks","mitata","testing","zig"],"latest_commit_sha":null,"homepage":"","language":"Zig","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zig-utils.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-10-26T19:45:38.000Z","updated_at":"2026-03-18T21:24:11.000Z","dependencies_parsed_at":"2025-10-27T00:13:12.121Z","dependency_job_id":"f1ca13cb-f714-458c-967b-95961434a53a","html_url":"https://github.com/zig-utils/zig-benchmarks","commit_stats":null,"previous_names":["zig-utils/zig-benchmarks"],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/zig-utils/zig-benchmarks","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zig-utils%2Fzig-benchmarks","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zig-utils%2Fzig-benchmarks/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositori
es/zig-utils%2Fzig-benchmarks/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zig-utils%2Fzig-benchmarks/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zig-utils","download_url":"https://codeload.github.com/zig-utils/zig-benchmarks/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zig-utils%2Fzig-benchmarks/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31290538,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-01T13:12:26.723Z","status":"ssl_error","status_checked_at":"2026-04-01T13:12:25.102Z","response_time":53,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmark-framework","benchmarks","mitata","testing","zig"],"created_at":"2025-11-02T01:01:49.242Z","updated_at":"2026-04-01T17:08:25.930Z","avatar_url":"https://github.com/zig-utils.png","language":"Zig","readme":"# Zig Bench\n\nA modern, performant, and beautiful benchmark framework for Zig, inspired by [mitata](https://github.com/evanwashere/mitata).\n\n## Features\n\n- **High Precision Timing** - Uses Zig's built-in high-resolution timer for accurate measurements\n- **Statistical Analysis** - Calculates mean, standard deviation, min/max, and percentiles (P50, P75, P99)\n- **Beautiful CLI Output** - Colorized output with clear formatting\n- **Flexible 
Configuration** - Customize warmup iterations, min/max iterations, and minimum time\n- **Async Support** - Built-in support for benchmarking async/error-handling functions\n- **Zero Dependencies** - Uses only Zig's standard library\n- **Automatic Iteration Adjustment** - Intelligently adjusts iterations based on operation speed\n- **Comparative Analysis** - Automatically identifies and highlights the fastest benchmark\n\n## Installation\n\n### Using Zig Package Manager\n\nAdd to your `build.zig.zon`:\n\n```zig\n.{\n    .name = \"my-project\",\n    .version = \"0.1.0\",\n    .dependencies = .{\n        .bench = .{\n            .url = \"https://github.com/zig-utils/zig-benchmarks/archive/main.tar.gz\",\n            .hash = \"...\",\n        },\n    },\n}\n```\n\nThen in your `build.zig`:\n\n```zig\nconst bench = b.dependency(\"bench\", .{\n    .target = target,\n    .optimize = optimize,\n});\n\nexe.root_module.addImport(\"bench\", bench.module(\"bench\"));\n```\n\n### Manual Installation\n\nClone this repository and add it as a module in your project:\n\n```zig\nconst bench_module = b.addModule(\"bench\", .{\n    .root_source_file = b.path(\"path/to/zig-bench/src/bench.zig\"),\n});\n\nexe.root_module.addImport(\"bench\", bench_module);\n```\n\n## Quick Start\n\n### Basic Benchmark\n\n```zig\nconst std = @import(\"std\");\nconst bench = @import(\"bench\");\n\nvar global_sum: u64 = 0;\n\nfn benchmarkLoop() void {\n    var sum: u64 = 0;\n    var i: u32 = 0;\n    while (i \u003c 1000) : (i += 1) {\n        sum += i;\n    }\n    global_sum = sum;\n}\n\npub fn main() !void {\n    const allocator = std.heap.page_allocator;\n\n    var suite = bench.BenchmarkSuite.init(allocator);\n    defer suite.deinit();\n\n    try suite.add(\"Loop 1000 times\", benchmarkLoop);\n    try suite.run();\n}\n```\n\n### Multiple Benchmarks\n\n```zig\nconst std = @import(\"std\");\nconst bench = @import(\"bench\");\n\nvar result: u64 = 0;\n\nfn fibonacci(n: u32) u64 {\n    if (n \u003c= 1) return 
n;\n    return fibonacci(n - 1) + fibonacci(n - 2);\n}\n\nfn benchFib20() void {\n    result = fibonacci(20);\n}\n\nfn benchFib25() void {\n    result = fibonacci(25);\n}\n\nfn benchFib30() void {\n    result = fibonacci(30);\n}\n\npub fn main() !void {\n    const allocator = std.heap.page_allocator;\n\n    var suite = bench.BenchmarkSuite.init(allocator);\n    defer suite.deinit();\n\n    try suite.add(\"Fibonacci(20)\", benchFib20);\n    try suite.add(\"Fibonacci(25)\", benchFib25);\n    try suite.add(\"Fibonacci(30)\", benchFib30);\n\n    try suite.run();\n}\n```\n\n### Custom Options\n\nCustomize warmup, iterations, and timing:\n\n```zig\nconst std = @import(\"std\");\nconst bench = @import(\"bench\");\n\nvar slow_sum: u64 = 0;\n\nfn slowOperation() void {\n    var sum: u64 = 0;\n    var i: u32 = 0;\n    while (i \u003c 10_000_000) : (i += 1) {\n        sum += i;\n    }\n    slow_sum = sum; // store to a global so the loop isn't optimized away\n}\n\npub fn main() !void {\n    const allocator = std.heap.page_allocator;\n\n    var suite = bench.BenchmarkSuite.init(allocator);\n    defer suite.deinit();\n\n    try suite.addWithOptions(\"Slow Operation\", slowOperation, .{\n        .warmup_iterations = 2,      // Fewer warmup iterations\n        .min_iterations = 5,          // Minimum iterations to run\n        .max_iterations = 50,         // Maximum iterations\n        .min_time_ns = 2_000_000_000, // Run for at least 2 seconds\n    });\n\n    try suite.run();\n}\n```\n\n### Async Benchmarks\n\nBenchmark functions that return errors:\n\n```zig\nconst std = @import(\"std\");\nconst bench = @import(\"bench\");\nconst async_bench = bench.async_bench;\n\nvar result: u8 = 0;\n\nfn asyncOperation() !void {\n    var gpa = std.heap.GeneralPurposeAllocator(.{}){};\n    defer _ = gpa.deinit();\n    const allocator = gpa.allocator();\n\n    const buffer = try allocator.alloc(u8, 1024);\n    defer allocator.free(buffer);\n\n    @memset(buffer, 'A');\n    result = buffer[0]; // copy a byte out; storing the slice itself would dangle after the free\n}\n\npub fn main() !void {\n    const allocator = std.heap.page_allocator;\n\n    var suite = 
async_bench.AsyncBenchmarkSuite.init(allocator);\n    defer suite.deinit();\n\n    try suite.add(\"Async Buffer Allocation\", asyncOperation);\n    try suite.run();\n}\n```\n\n## API Reference\n\n### BenchmarkSuite\n\nThe main interface for running multiple benchmarks.\n\n```zig\npub const BenchmarkSuite = struct {\n    pub fn init(allocator: Allocator) BenchmarkSuite\n    pub fn deinit(self: *BenchmarkSuite) void\n    pub fn add(self: *BenchmarkSuite, name: []const u8, func: *const fn () void) !void\n    pub fn addWithOptions(self: *BenchmarkSuite, name: []const u8, func: *const fn () void, opts: BenchmarkOptions) !void\n    pub fn run(self: *BenchmarkSuite) !void\n};\n```\n\n### BenchmarkOptions\n\nConfiguration options for individual benchmarks.\n\n```zig\npub const BenchmarkOptions = struct {\n    warmup_iterations: u32 = 5,           // Number of warmup runs\n    min_iterations: u32 = 10,             // Minimum iterations to execute\n    max_iterations: u32 = 10_000,         // Maximum iterations to execute\n    min_time_ns: u64 = 1_000_000_000,     // Minimum time to run (1 second)\n    baseline: ?[]const u8 = null,         // Reserved for future baseline comparison\n};\n```\n\n### BenchmarkResult\n\nResults from a benchmark run containing statistical data.\n\n```zig\npub const BenchmarkResult = struct {\n    name: []const u8,\n    samples: std.ArrayList(u64),\n    mean: f64,           // Mean execution time in nanoseconds\n    stddev: f64,         // Standard deviation\n    min: u64,            // Minimum time\n    max: u64,            // Maximum time\n    p50: u64,            // 50th percentile (median)\n    p75: u64,            // 75th percentile\n    p99: u64,            // 99th percentile\n    ops_per_sec: f64,    // Operations per second\n    iterations: u64,     // Total iterations executed\n};\n```\n\n### AsyncBenchmarkSuite\n\nFor benchmarking functions that can return errors.\n\n```zig\npub const AsyncBenchmarkSuite = struct {\n    pub fn 
init(allocator: Allocator) AsyncBenchmarkSuite\n    pub fn deinit(self: *AsyncBenchmarkSuite) void\n    pub fn add(self: *AsyncBenchmarkSuite, name: []const u8, func: *const fn () anyerror!void) !void\n    pub fn addWithOptions(self: *AsyncBenchmarkSuite, name: []const u8, func: *const fn () anyerror!void, opts: BenchmarkOptions) !void\n    pub fn run(self: *AsyncBenchmarkSuite) !void\n};\n```\n\n## Examples\n\nThe `examples/` directory contains several complete examples:\n\n- `basic.zig` - Simple benchmarks comparing different operations\n- `async.zig` - Async/error-handling benchmark examples\n- `custom_options.zig` - Customizing benchmark parameters\n- `filtering_baseline.zig` - Benchmark filtering and baseline saving\n- `allocators.zig` - Comparing different allocator performance\n- `advanced_features.zig` - Complete demonstration of all Phase 1 advanced features\n- `phase2_features.zig` - Complete demonstration of all Phase 2 advanced features (groups, warmup, outliers, parameterized, parallel)\n\nRun examples:\n\n```bash\n# Build and run all examples\nzig build examples\n\n# Run specific example\nzig build run-basic\nzig build run-async\nzig build run-custom_options\nzig build run-filtering_baseline\nzig build run-allocators\nzig build run-advanced_features\nzig build run-phase2_features\n```\n\n## Output Format\n\nZig Bench provides beautiful, colorized output:\n\n```\nZig Benchmark Suite\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n▶ Running: Fibonacci(20)\n  Iterations: 1000\n  Mean:       127.45 µs\n  Std Dev:    12.34 µs\n  Min:        115.20 µs\n  Max:        156.78 µs\n  P50:        125.90 µs\n  P75:        132.45 µs\n  P99:        145.67 µs\n  Ops/sec:    7.85k\n\nSummary\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n  ✓ Fibonacci(20) - fastest\n  • Fibonacci(25) - 5.47x slower\n  • Fibonacci(30) - 37.21x slower\n```\n\n## Best Practices\n\n1. **Avoid I/O Operations**: Benchmark pure computation when possible\n2. 
**Use Global Variables**: Store results in global variables so the compiler cannot optimize the work away\n3. **Appropriate Iterations**: Fast operations need more iterations, slow operations need fewer\n4. **Warmup Phase**: Always include warmup iterations for cache and branch-predictor warming\n5. **Isolate Benchmarks**: Each benchmark should test one specific operation\n6. **Minimize Allocations**: Be mindful of memory allocations in the hot path\n\n## Performance Considerations\n\n- Uses Zig's `std.time.Timer` for high-resolution timing\n- Minimal overhead - the framework itself adds negligible time\n- No heap allocations in the hot benchmark loop\n- Efficient statistical calculations\n- Automatic iteration adjustment based on operation speed\n\n## Building from Source\n\n```bash\n# Clone the repository\ngit clone https://github.com/zig-utils/zig-benchmarks.git\ncd zig-benchmarks\n\n# Run all tests (unit + integration)\nzig build test\n\n# Run only unit tests\nzig build test-unit\n\n# Run only integration tests\nzig build test-integration\n\n# Build all examples\nzig build examples\n\n# Run a specific example\nzig build run-basic\n```\n\n## Requirements\n\n- Zig 0.15.0 or later\n\n## Contributing\n\nContributions are welcome! 
Please feel free to submit a Pull Request.\n\n## License\n\nMIT License - see LICENSE file for details\n\n## Acknowledgments\n\nInspired by [mitata](https://github.com/evanwashere/mitata), a beautiful JavaScript benchmarking library.\n\n## Advanced Features\n\nZig Bench includes a comprehensive suite of advanced features for professional benchmarking.\n\n### Organization \u0026 Workflow\n\n- **Benchmark Groups/Categories** - Organize related benchmarks into logical groups\n- **Benchmark Filtering** - Run specific benchmarks by name pattern\n- **Parameterized Benchmarks** - Test performance across different input sizes or parameters\n\n### Performance Analysis\n\n- **Automatic Warmup Detection** - Intelligently determine optimal warmup iterations\n- **Statistical Outlier Detection** - Remove anomalies using IQR, Z-score, or MAD methods\n- **Memory Profiling** - Track memory allocations, peak usage, and allocation counts\n- **Multi-threaded Benchmarks** - Test parallel performance and thread scalability\n\n### Comparison \u0026 Regression Detection\n\n- **Historical Baseline Comparison** - Compare current results against saved baselines\n- **Regression Detection** - Automatic detection of performance regressions with configurable thresholds\n- **Custom Allocator Benchmarking** - Compare performance across different allocators\n\n### Export \u0026 Visualization\n\n- **JSON/CSV Export** - Export benchmark results to standard formats\n- **Flamegraph Support** - Generate flamegraph-compatible output for profiling tools\n- **Web Dashboard** - Interactive HTML dashboard for visualizing results\n\n### CI/CD Integration\n\n- **GitHub Actions Workflow** - Ready-to-use workflow with PR comments and artifact uploads\n- **GitLab CI Template** - Complete pipeline with Pages dashboard generation\n- **CI/CD Helpers** - Built-in support for GitHub Actions, GitLab CI, and generic CI systems\n\n### Export Results to JSON/CSV\n\n```zig\nconst export_mod = @import(\"export\");\n\nconst 
exporter = export_mod.Exporter.init(allocator);\n\n// Export to JSON\ntry exporter.exportToFile(results, \"benchmark_results.json\", .json);\n\n// Export to CSV\ntry exporter.exportToFile(results, \"benchmark_results.csv\", .csv);\n```\n\n### Baseline Comparison \u0026 Regression Detection\n\n```zig\nconst comparison_mod = @import(\"comparison\");\n\n// Create comparator with 10% regression threshold\nconst comparator = comparison_mod.Comparator.init(allocator, 10.0);\n\n// Compare current results against baseline\nconst comparisons = try comparator.compare(results, \"baseline.json\");\ndefer allocator.free(comparisons);\n\n// Print comparison report\ntry comparator.printComparison(stdout, comparisons);\n```\n\n### Memory Profiling\n\n```zig\nconst memory_profiler = @import(\"memory_profiler\");\n\n// Create profiling allocator\nvar profiling_allocator = memory_profiler.ProfilingAllocator.init(base_allocator);\nconst tracked_allocator = profiling_allocator.allocator();\n\n// Run benchmark with tracked allocator\n// ... 
benchmark code ...\n\n// Get memory statistics\nconst stats = profiling_allocator.getStats();\n// stats contains: peak_allocated, total_allocated, total_freed,\n//                 current_allocated, allocation_count, free_count\n```\n\n### CI/CD Integration\n\n```zig\nconst ci = @import(\"ci\");\n\n// Detect CI environment automatically\nconst ci_format = ci.detectCIEnvironment();\n\n// Create CI helper with configuration\nvar ci_helper = ci.CIHelper.init(allocator, .{\n    .fail_on_regression = true,\n    .regression_threshold = 10.0,\n    .baseline_path = \"baseline.json\",\n    .output_format = ci_format,\n});\n\n// Generate CI-specific summary\ntry ci_helper.generateSummary(results);\n\n// Check for regressions\nconst has_regression = try ci_helper.checkRegressions(results);\nif (has_regression and ci_helper.shouldFailBuild(has_regression)) {\n    std.process.exit(1); // Fail the build\n}\n```\n\n### Flamegraph Generation\n\n```zig\nconst flamegraph_mod = @import(\"flamegraph\");\n\nconst flamegraph_gen = flamegraph_mod.FlamegraphGenerator.init(allocator);\n\n// Generate folded stack format for flamegraph.pl\ntry flamegraph_gen.generateFoldedStacks(\"benchmark.folded\", \"MyBenchmark\", 10000);\n\n// Generate profiler instructions\ntry flamegraph_gen.generateInstructions(stdout, \"my_executable\");\n\n// Detect available profilers\nconst recommended = flamegraph_mod.ProfilerIntegration.recommendProfiler();\n```\n\n### Benchmark Filtering\n\n```zig\nvar suite = bench.BenchmarkSuite.init(allocator);\ndefer suite.deinit();\n\ntry suite.add(\"Fast Operation\", fastOp);\ntry suite.add(\"Slow Operation\", slowOp);\ntry suite.add(\"Fast Algorithm\", fastAlgo);\n\n// Only run benchmarks matching \"Fast\"\nsuite.setFilter(\"Fast\");\n\ntry suite.run(); // Only runs \"Fast Operation\" and \"Fast Algorithm\"\n```\n\n### Custom Allocator Benchmarking\n\n```zig\nvar suite = bench.BenchmarkSuite.init(allocator);\ndefer suite.deinit();\n\n// Benchmark with custom 
allocator\ntry suite.addWithAllocator(\"GPA Benchmark\", benchmarkFunc, gpa_allocator);\ntry suite.addWithAllocator(\"Arena Benchmark\", benchmarkFunc, arena_allocator);\n\ntry suite.run();\n```\n\n## FAQ\n\n### Why store results in global variables?\n\nModern compilers are very smart at optimizing away \"dead code\". If your benchmark function's result isn't used, the compiler might optimize away the entire function. Storing results in global variables prevents this.\n\n### How are iterations determined?\n\nThe framework runs benchmarks until either:\n1. The `max_iterations` limit is reached, OR\n2. The `min_time_ns` has elapsed AND at least `min_iterations` have run\n\nThis ensures fast operations get enough samples while slow operations don't take too long.\n\n### Can I benchmark allocations?\n\nYes! Just be aware that allocation benchmarks should:\n1. Clean up allocations within the benchmark function\n2. Use realistic allocation patterns\n3. Consider using custom options to adjust iteration counts\n\n### How accurate are the measurements?\n\nMeasurements use Zig's high-resolution timer which typically has nanosecond precision on modern systems. 
However, actual accuracy depends on:\n- System load\n- CPU frequency scaling\n- Cache effects\n- Background processes\n\nRun multiple times and look for consistency in results.\n\n## Phase 2 Advanced Features\n\n### Benchmark Groups\n\nOrganize related benchmarks into categories for better organization:\n\n```zig\nconst groups = @import(\"groups\");\n\nvar manager = groups.GroupManager.init(allocator);\ndefer manager.deinit();\n\n// Create groups\nvar algorithms = try manager.addGroup(\"Algorithms\");\ntry algorithms.add(\"QuickSort\", quicksortBench);\ntry algorithms.add(\"MergeSort\", mergesortBench);\n\nvar io = try manager.addGroup(\"I/O Operations\");\ntry io.add(\"File Read\", fileReadBench);\ntry io.add(\"File Write\", fileWriteBench);\n\n// Run all groups\ntry manager.runAll();\n\n// Or run specific group\ntry manager.runGroup(\"Algorithms\");\n```\n\n### Automatic Warmup Detection\n\nLet the framework automatically determine optimal warmup iterations:\n\n```zig\nconst warmup = @import(\"warmup\");\n\nconst detector = warmup.WarmupDetector.initDefault();\nconst result = try detector.detect(myBenchFunc, allocator);\n\nstd.debug.print(\"Optimal warmup: {d} iterations\\n\", .{result.optimal_iterations});\nstd.debug.print(\"Stabilized: {}\\n\", .{result.stabilized});\nstd.debug.print(\"CV: {d:.4}\\n\", .{result.final_cv});\n```\n\n### Outlier Detection and Removal\n\nClean benchmark data by removing statistical outliers:\n\n```zig\nconst outliers = @import(\"outliers\");\n\n// Configure outlier detection\nconst config = outliers.OutlierConfig{\n    .method = .iqr,  // or .zscore, .mad\n    .iqr_multiplier = 1.5,\n};\n\nconst detector = outliers.OutlierDetector.init(config);\nvar result = try detector.detectAndRemove(samples, allocator);\ndefer result.deinit();\n\nstd.debug.print(\"Removed {d} outliers ({d:.2}%)\\n\", .{\n    result.outlier_count,\n    result.outlier_percentage,\n});\n```\n\n### Parameterized Benchmarks\n\nTest performance across different input 
sizes:\n\n```zig\nconst param = @import(\"parameterized\");\n\n// Define sizes to test\nconst sizes = [_]usize{ 10, 100, 1000, 10000 };\n\n// Create parameterized benchmark\nvar suite = try param.sizeParameterized(\n    allocator,\n    \"Array Sort\",\n    arraySortBench,\n    \u0026sizes,\n);\ndefer suite.deinit();\n\ntry suite.run();\n```\n\n### Multi-threaded Benchmarks\n\nMeasure parallel performance and scalability:\n\n```zig\nconst parallel = @import(\"parallel\");\n\n// Single parallel benchmark\nconst config = parallel.ParallelConfig{\n    .thread_count = 4,\n    .iterations_per_thread = 1000,\n};\n\nconst pb = parallel.ParallelBenchmark.init(allocator, \"Parallel Op\", func, config);\nvar result = try pb.run();\ndefer result.deinit();\n\ntry parallel.ParallelBenchmark.printResult(\u0026result);\n\n// Scalability test across thread counts\nconst thread_counts = [_]usize{ 1, 2, 4, 8 };\nconst scalability = parallel.ScalabilityTest.init(\n    allocator,\n    \"Scalability\",\n    func,\n    \u0026thread_counts,\n    1000,\n);\ntry scalability.run();\n```\n\n### CI/CD Integration\n\n#### GitHub Actions\n\nCopy `.github/workflows/benchmarks.yml` to your repository for automatic benchmarking on every push/PR:\n\n- Runs benchmarks on multiple platforms\n- Compares against baseline\n- Posts results as PR comments\n- Uploads artifacts\n\n#### GitLab CI\n\nCopy `.gitlab-ci.yml` to your repository for GitLab CI integration:\n\n- Multi-stage pipeline (build, test, benchmark, report)\n- Automatic baseline comparison\n- GitLab Pages dashboard generation\n- Regression detection\n\n### Web Dashboard\n\nOpen `web/dashboard.html` in a browser to visualize benchmark results:\n\n- Interactive charts and graphs\n- Load results from JSON files or URLs\n- Compare multiple benchmark runs\n- Export/share visualizations\n\nTo use:\n1. Run benchmarks and generate `benchmark_results.json`\n2. Open `web/dashboard.html` in a browser\n3. Load the JSON file or use demo data\n4. 
Explore interactive visualizations\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzig-utils%2Fzig-benchmarks","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fzig-utils%2Fzig-benchmarks","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzig-utils%2Fzig-benchmarks/lists"}