{"id":48783487,"url":"https://github.com/cakmoel/resilio","last_synced_at":"2026-04-13T15:03:15.482Z","repository":{"id":332028594,"uuid":"1128339766","full_name":"cakmoel/resilio","owner":"cakmoel","description":"Professional technology-agnostic load testing suite built for performance engineering and durability auditing. Implements research-based methodologies (Jain, 1991) and ISO 25010 standards to validate speed, endurance, and scalability across any backend stack.","archived":false,"fork":false,"pushed_at":"2026-01-12T05:52:03.000Z","size":292,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-01-12T11:02:29.216Z","etag":null,"topics":["apachebench","benchmarking","devops-tools","endurance-testing","load-testing","performance-testing","quality-assurance","reliability-engineering","scalability","sre","stress-testing","tech-agnostic"],"latest_commit_sha":null,"homepage":"https://s.id/resilio","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/cakmoel.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE.md","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-01-05T13:47:32.000Z","updated_at":"2026-01-12T05:52:07.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/cakmoel/resilio","commit_stats":null,"previous_names":["cakmoel/resilio"],"tags_count":3,"template":false,"template_full_name":null,"purl":"pkg:g
ithub/cakmoel/resilio","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cakmoel%2Fresilio","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cakmoel%2Fresilio/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cakmoel%2Fresilio/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cakmoel%2Fresilio/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/cakmoel","download_url":"https://codeload.github.com/cakmoel/resilio/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cakmoel%2Fresilio/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31757482,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-13T13:27:56.013Z","status":"ssl_error","status_checked_at":"2026-04-13T13:21:23.512Z","response_time":93,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["apachebench","benchmarking","devops-tools","endurance-testing","load-testing","performance-testing","quality-assurance","reliability-engineering","scalability","sre","stress-testing","tech-agnostic"],"created_at":"2026-04-13T15:02:59.247Z","updated_at":"2026-04-13T15:03:15.472Z","avatar_url":"https://github.com/cakmoel.png","language":"Shell","readme":"# 
Resilio\n\n**High-Performance Load Testing Suite for Web Durability and Speed**\n\n[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)\n[![Version](https://img.shields.io/badge/version-6.3.0-green.svg)](CHANGELOG.md)\n[![SLT Engine](https://img.shields.io/badge/SLT-v2.2-blue.svg)](bin/slt.sh)\n![CI](https://github.com/cakmoel/resilio/actions/workflows/ci.yml/badge.svg)\n\n\n---\n\n## Overview\n\nResilio is a professional-grade performance engineering toolkit designed for QA Engineers, Developers, and DevOps practitioners. It provides a structured, technology-agnostic methodology to measure the speed, endurance, and scalability of web applications and APIs.\n\nBy leveraging the reliability of ApacheBench and adding layers of statistical analysis, automated hypothesis testing, and research-based methodologies, Resilio transforms raw network data into high-fidelity performance intelligence.\n\n### Why Resilio?\n\n- **Research-Based Methodology**: Implements ISO 25010 standards and academic frameworks (Jain, 1991; Welch, 1947; Mann \u0026 Whitney, 1947)\n- **Advanced Statistical Testing**: Automatic selection between parametric (Welch's t-test) and non-parametric (Mann-Whitney U) methods\n- **Intelligent Test Selection**: Automatically chooses the best statistical test based on data distribution\n- **Technology-Agnostic**: Tests any web application via HTTP protocol (PHP, Node.js, Python, Go, Java, Ruby, .NET, Rust)\n- **Automated Regression Detection**: Compare against baselines with statistical hypothesis testing\n- **Hybrid Baseline Management**: Git-integrated for production, local-only for development\n- **Comprehensive Metrics**: RPS, percentiles (P50/P95/P99), latency, stability (CV), and error rates\n\n---\n\n## 🆕 What's New in v6.3\n\n### New Feature: Iteration Delay for Rate Limiting\n\nv6.3 introduces a new configurable parameter for `slt.sh` to control the pacing of your load tests.\n\n#### Key Benefits:\n*   **Controlled Test 
Pacing:** Prevent overwhelming target systems by introducing configurable pauses between test cycles.\n*   **Reduced System Load:** Space out test requests to simulate more realistic user behavior or to comply with system capacity limits.\n*   **Improved Stability:** Help maintain the stability of the system under test during prolonged load testing by giving it time to recover between iterations.\n\n#### How to Use:\nYou can configure the delay by setting the `ITERATION_DELAY_SECONDS` environment variable before running `slt.sh`:\n\n```bash\nITERATION_DELAY_SECONDS=5 ./bin/slt.sh\n```\n\nThis will introduce a 5-second pause after all scenarios within a single iteration have completed, before the next iteration begins.\n\n### Backward Compatibility\n\n**✅ 100% compatible with v6.2 usage:**\n- All v6.2 commands work identically\n- Baseline format unchanged\n- Report structure preserved\n- CLI interface identical\n- Only enhancement: Addition of iteration delay for SLT.\n\n**Migration:** Simply use `v6.3` - no configuration changes needed!\n\n---\n\n## Core Engines\n\n### Resilio SLT (Simple Load Testing) - `bin/slt.sh` v2.3 (Suite v6.3)\n\nThe **SLT engine** is optimized for agile development cycles and rapid feedback. Perfect for:\n\n- Quick performance checks during development\n- Smoke testing before deployments\n- CI/CD pipeline integration\n- Endpoint comparison and basic benchmarking\n\n**Key Features:**\n- Configurable iterations (default: 1000)\n- Concurrent user simulation (default: 10)\n- Percentile analysis (P50, P95, P99)\n- Stability measurement (Coefficient of Variation)\n- Error tracking without breaking calculations\n- Comprehensive summary reports in Markdown\n\n---\n\n### Resilio DLT (Deep Load Testing) - `bin/dlt.sh` v6.3\n\nThe **DLT engine** is a research-grade powerhouse designed for rigorous statistical analysis. 
Perfect for:\n\n- Production baseline establishment\n- Statistical hypothesis testing with automatic test selection\n- Regression detection with effect size analysis\n- Capacity planning and SLA validation\n- Performance trending over releases\n- Tail latency analysis (P95/P99)\n\n**Key Features:**\n\n#### Statistical Testing (v6.2)\n-   **Python-powered backend** - Extremely fast calculations for any data volume.\n-   **Automatic test selection** - Chooses best method for your data.\n-   **Mann-Whitney U test** - Robust for non-normal distributions ($O(n \\log n)$).\n-   **Welch's t-test** - Powerful for normal distributions.\n-   **Normality checking** - Skewness and kurtosis analysis.\n-   **Effect size calculation** - Cohen's d and rank-biserial correlation.\n-   **95% confidence intervals** - Statistical accuracy bounds.\n\n#### Test Execution\n- Three-phase execution (Warm-up → Ramp-up → Sustained)\n- Realistic workload simulation (2-second think time)\n- System resource monitoring (CPU, memory, disk I/O)\n- Automated regression detection\n\n#### Baseline Management\n- Git-integrated baseline management\n- Production vs development modes\n- Metadata tracking with Git commits\n- Automatic baseline comparison\n\n---\n\n## When to Use Each Engine\n\n| Scenario | Use SLT | Use DLT |\n|----------|---------|---------|\n| Quick performance check | ✅ | ❌ |\n| CI/CD integration | ✅ | ⚠️ (time-consuming) |\n| Compare endpoints | ✅ | ❌ |\n| Initial benchmarking | ✅ | ❌ |\n| Production baseline | ❌ | ✅ |\n| Statistical validation | ❌ | ✅ |\n| **Tail latency testing (P95/P99)** | ❌ | ✅ **(v6.2 excels!)** |\n| Regression detection | ❌ | ✅ |\n| Capacity planning | ❌ | ✅ |\n| SLA validation | ❌ | ✅ |\n| Memory leak detection | ❌ | ✅ |\n\n---\n\n## Technology Compatibility\n\nResilio works with **any web technology** because it tests via HTTP protocol:\n\n| Technology | Framework Examples | Status |\n|------------|-------------------|--------|\n| **PHP** | Laravel, Symfony, 
WordPress, Slim | ✅ Fully Supported |\n| **JavaScript** | Node.js, Express, Next.js, Nest.js | ✅ Fully Supported |\n| **Python** | Django, Flask, FastAPI, Pyramid | ✅ Fully Supported |\n| **Go** | Gin, Echo, Fiber, Chi | ✅ Fully Supported |\n| **Ruby** | Rails, Sinatra, Hanami | ✅ Fully Supported |\n| **Java** | Spring Boot, Micronaut, Quarkus | ✅ Fully Supported |\n| **.NET** | ASP.NET Core, Nancy | ✅ Fully Supported |\n| **Rust** | Actix-web, Rocket, Axum | ✅ Fully Supported |\n\n**Why it works:** Resilio operates at the HTTP protocol layer, measuring request/response cycles exactly as end-users experience them—regardless of backend implementation.\n\n---\n\n## Quick Start\n\n### Prerequisites\n\n- **Python 3.10+** (Mandatory for DLT math engine)\n- **ApacheBench (ab)** (Standard `apache2-utils`)\n- **Bash 4.4+**\n- **bc** (Arbitrary precision calculator)\n- **GNU text utilities** (`awk`, `grep`, `sed`, `sort`, `uniq`)\n- **Git** (For baseline version control)\n- **curl** (For system metric validation)\n- **iostat** (System monitoring, part of `sysstat`)\n\n**Install dependencies:**\n\n```bash\n# Ubuntu/Debian\nsudo apt-get update\nsudo apt-get install apache2-utils bc gawk grep coreutils sysstat\n\n# CentOS/RHEL/Fedora\nsudo yum install httpd-tools bc gawk grep coreutils sysstat\n\n# macOS (the Homebrew formula is httpd, which provides ab)\nbrew install httpd\n# bc, awk, grep are pre-installed\n```\n\n**Verify Installation:**\n\n```bash\nab -V \u0026\u0026 bc --version \u0026\u0026 awk --version \u0026\u0026 grep --version\n```\n\n### Installation\n\n```bash\n# 1. Clone or download the repository\ngit clone https://github.com/cakmoel/resilio.git\ncd resilio\n\n# 2. Make scripts executable\nchmod +x bin/slt.sh bin/dlt.sh\n\n# 3. 
Configure test scenarios (edit the SCENARIOS section)\nnano bin/dlt.sh  # or bin/slt.sh\n```\n\n### Basic Usage\n\n**Simple Load Testing (SLT):**\n\n```bash\n# Default: 1000 iterations, 100 requests/test, 10 concurrent users\n./bin/slt.sh\n\n# Custom parameters\nITERATIONS=500 AB_REQUESTS=50 AB_CONCURRENCY=5 ./bin/slt.sh\n\n# With iteration delay\nITERATION_DELAY_SECONDS=5 ITERATIONS=100 AB_REQUESTS=10 AB_CONCURRENCY=2 ./bin/slt.sh\n```\n\n**Deep Load Testing (DLT):**\n\n```bash\n# Research-based three-phase test with automatic statistical test selection\n./bin/dlt.sh\n\n# Results include hypothesis testing against baseline\ncat load_test_reports_*/hypothesis_testing_*.md\n```\n\n---\n\n## Performance Methodology\n\nResilio is not a basic wrapper for ApacheBench—it's a framework implementing rigorous statistical controls to ensure performance data is actionable and scientifically sound.\n\n### 1. Tail Latency Analysis (P95/P99)\n\nAverage response times mask the \"long tail\" of user dissatisfaction. Resilio focuses on **P95 and P99 latencies** to identify worst-case scenarios caused by:\n- Resource contention\n- Garbage collection pauses\n- Network jitter\n- Database query variance\n\n**New in v6.2:** Mann-Whitney U test is specifically designed for tail latency metrics, providing more accurate detection of regressions in P95/P99 values.\n\n### 2. Stability Measurement (Coefficient of Variation)\n\nThe **CV metric** reveals system consistency:\n- **CV \u003c 10%**: Excellent stability\n- **CV \u003c 20%**: Good stability\n- **CV \u003c 30%**: Moderate stability\n- **CV ≥ 30%**: Poor stability (investigate)\n\nA low average RPS is acceptable if CV is low (consistency), but high RPS with high CV indicates instability.\n\n### 3. Three-Phase Execution (DLT Only)\n\nAdheres to the **USE Method** (Utilization, Saturation, Errors):\n\n1. **Warm-up Phase** (50 iterations): Primes JIT compilers, connection pools, and caches\n2. 
**Ramp-up Phase** (100 iterations): Gradually increases load to observe the \"Knee of the Curve\"\n3. **Sustained Load** (850 iterations): Collects primary dataset for statistical analysis\n\n### 4. Statistical Hypothesis Testing (DLT Only)\n\n**New in v6.2:** Automatic test selection between two methods:\n\n#### Welch's t-test (Parametric)\n**Used when:** Data is approximately normal (|skewness| \u003c 1.0 AND |kurtosis| \u003c 2.0)\n\n**Best for:**\n- Mean RPS (requests per second)\n- Average response time\n- Throughput metrics\n\n**Advantages:** More statistical power (better at detecting true differences)\n\n#### Mann-Whitney U Test (Non-Parametric) - NEW!\n**Used when:** Data is non-normal (|skewness| ≥ 1.0 OR |kurtosis| ≥ 2.0)\n\n**Best for:**\n- P95/P99 latencies (long tails)\n- Error rates (heavily skewed)\n- Cache hit rates (bimodal)\n\n**Advantages:** Robust to outliers, no distribution assumptions\n\n#### Hypothesis Testing Framework\n\n- **Null Hypothesis (H₀)**: No significant difference exists\n- **Alternative Hypothesis (H₁)**: Significant difference detected\n- **Significance Level**: α = 0.05 (95% confidence)\n\n**Effect Size:**\n- **Cohen's d** (for Welch's t-test): Standardized mean difference\n- **Rank-biserial r** (for Mann-Whitney U): Analogous to Cohen's d\n\n**Interpretation (both metrics):**\n- \u003c 0.2: Negligible\n- 0.2 - 0.5: Small\n- 0.5 - 0.8: Medium\n- \\\u003e 0.8: Large\n\nThis ensures decisions are based on **both statistical significance and practical importance**.\n\n### 5. 
95% Confidence Intervals\n\nAll Mean RPS values include confidence intervals, ensuring results represent true system capacity—not lucky runs.\n\n---\n\n## Understanding Results\n\n### SLT Output Structure\n\n```\nload_test_results_YYYYMMDD_HHMMSS/\n├── summary_report.md          # Main performance report\n├── console_output.log         # Real-time test output\n├── execution.log              # Detailed execution log\n├── error.log                  # Error tracking\n└── raw_*.txt                  # Raw ApacheBench outputs\n```\n\n**Key Metrics:**\n- **Average RPS**: Mean throughput\n- **Median RPS**: Less affected by outliers\n- **Standard Deviation**: Consistency indicator\n- **P50/P95/P99**: Percentile response times\n- **CV (Coefficient of Variation)**: Stability score\n- **Success/Error Rate**: Reliability metrics\n\n---\n\n### DLT Output Structure\n\n```\nload_test_reports_YYYYMMDD_HHMMSS/\n├── research_report_*.md         # Comprehensive analysis\n├── hypothesis_testing_*.md      # Statistical comparison (enhanced in v6.1)\n├── system_metrics.csv           # CPU, memory, disk I/O\n├── error_log.txt                # Error tracking\n├── execution.log                # Phase-by-phase log\n├── raw_data/                    # All ApacheBench outputs\n└── charts/                      # Reserved for visualizations\n```\n\n**Key Metrics:**\n- **Mean with 95% CI**: Statistical accuracy bounds\n- **Statistical Test Used**: Shows which test was automatically selected (v6.2)\n- **Test Statistic**: t-value (Welch's) or U-value (Mann-Whitney)\n- **p-value**: Statistical significance\n- **Effect Size**: Cohen's d or rank-biserial r\n- **Verdict**: Regression/Improvement/No Change\n- **Distribution Characteristics**: Skewness and kurtosis (v6.2)\n\n---\n\n### Example: Enhanced v6.2 Report\n\n```markdown\n### API_Endpoint\n\n**Test Used**: Mann-Whitney U test (non-parametric)\n**Reason**: Non-normal distribution detected\n\n| Metric | Value | Interpretation 
|\n|--------|-------|----------------|\n| **Test Statistic** | 1247 | U-value |\n| **p-value** | 0.032 | Statistically significant ★ |\n| **Effect Size** | -0.34 | Rank-biserial r |\n| **Effect Magnitude** | small | - |\n| **Verdict** | ⚠️ SIGNIFICANT REGRESSION | - |\n\n#### Distribution Characteristics\n\n- **Baseline**: non_normal|skew=2.34|kurt=8.91\n- **Candidate**: non_normal|skew=1.87|kurt=6.23\n\nMann-Whitney U test was used because at least one sample showed \nnon-normal distribution. This test is more robust to outliers and \nskewed data, making it ideal for tail latency metrics (P95/P99).\n\n- **Strong evidence** against H₀ (95% confidence)\n- Effect size is **small** (Rank-biserial r = -0.34)\n- **Practical significance**: Change is statistically detectable but may not be practically important\n```\n\n---\n\n## Configuration\n\n### Configuring Test Scenarios\n\nBoth scripts use a `SCENARIOS` associative array:\n\n```bash\n# Edit bin/slt.sh or bin/dlt.sh\ndeclare -A SCENARIOS=(\n    [\"Homepage\"]=\"http://localhost:8000/\"\n    [\"API_Users\"]=\"http://localhost:8000/api/users\"\n    [\"Product_Page\"]=\"http://localhost:8000/products/123\"\n)\n```\n\n### Environment Variables (SLT)\n\n```bash\nITERATIONS=1000          # Number of test iterations\nAB_REQUESTS=100          # Requests per test\nAB_CONCURRENCY=10        # Concurrent users\nAB_TIMEOUT=30            # Timeout in seconds\n```\n\n**Example:**\n\n```bash\nITERATIONS=500 AB_CONCURRENCY=20 ./bin/slt.sh\n```\n\n### Environment Configuration (DLT)\n\n**Production Mode** (Git-tracked baselines):\n\n```bash\n# Create .env file\necho \"APP_ENV=production\" \u003e .env\n\n# Configure URLs\necho 'STATIC_PAGE=https://prod.example.com/' \u003e\u003e .env\necho 'DYNAMIC_PAGE=https://prod.example.com/api/users' \u003e\u003e .env\n\n./bin/dlt.sh\n```\n\nBaselines saved to: `./baselines/` (Git-tracked)\n\n**Local Development Mode** (local-only baselines):\n\n```bash\necho \"APP_ENV=local\" \u003e 
.env\n./bin/dlt.sh\n```\n\nBaselines saved to: `./.dlt_local/` (not Git-tracked)\n\n---\n\n## CI/CD Integration\n\n### GitHub Actions Example\n\n```yaml\nname: Performance Regression Check\n\non:\n  pull_request:\n    branches: [main]\n\njobs:\n  load-test:\n    runs-on: ubuntu-latest\n    \n    steps:\n      - uses: actions/checkout@v4\n        with:\n          fetch-depth: 0  # Need baselines from history\n      \n      - name: Install Dependencies\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y apache2-utils bc sysstat\n      \n      - name: Run Load Test (v6.3 with automatic test selection)\n        run: |\n          chmod +x bin/dlt.sh\n          ./bin/dlt.sh\n      \n      - name: Check for Regressions\n        run: |\n          REPORT=$(cat load_test_reports_*/hypothesis_testing_*.md)\n          \n          # Check for significant regressions\n          if echo \"$REPORT\" | grep -q \"SIGNIFICANT REGRESSION\"; then\n            echo \"⚠️ Performance regression detected!\"\n            echo \"$REPORT\"\n            exit 1\n          fi\n          \n          # v6.2: Also check which test was used\n          echo \"Statistical Test Summary:\"\n          echo \"$REPORT\" | grep \"Test Used:\"\n      \n      - name: Upload Reports\n        if: always()\n        uses: actions/upload-artifact@v4\n        with:\n          name: performance-reports\n          path: load_test_reports_*/**\n```\n\n---\n\n## Best Practices\n\n### Before Testing\n\n1. **Never test production** without authorization\n2. **Warm up your application** before recording metrics\n3. **Check resource limits**: `ulimit -n 10000`\n4. **Disable rate limiting** temporarily during tests\n5. **Monitor application logs** during test execution\n\n### Interpreting Results (Updated for v6.2)\n\n1. **Focus on percentiles**: P95/P99 matter more than averages\n2. **Check CV first**: High CV = unstable system\n3. **Compare against baselines**: Use DLT for trend analysis\n4. 
**Consider both p-value AND effect size**: Statistical significance ≠ practical importance\n5. **Review test selection** (v6.1): Check if Mann-Whitney U was used for tail latencies\n6. **Inspect distribution characteristics** (v6.1): High skewness/kurtosis indicates need for non-parametric tests\n7. **Document test conditions**: Note system state, data volume, background jobs\n\n### When to Trust Mann-Whitney U Results (v6.2)\n\nMann-Whitney U test is **more reliable** than Welch's t-test when:\n- Testing P95/P99 latencies (almost always non-normal)\n- Data has outliers (e.g., occasional 5-second response times)\n- Error rates (many zeros, few spikes)\n- Cache performance (bimodal distribution: hit vs miss)\n\n**Check your report:** Look for `\"Test Used: Mann-Whitney U test\"` in the hypothesis testing report.\n\n### Production Baseline Management\n\n```bash\n# 1. Establish baseline during stable period\necho \"APP_ENV=production\" \u003e .env\n./bin/dlt.sh\n\n# 2. Commit baselines to Git\ngit add baselines/\ngit commit -m \"chore: establish performance baseline for release v2.0\"\ngit push\n\n# 3. Future tests automatically compare against this baseline\n./bin/dlt.sh\n# v6.3 automatically selects best statistical test!\n\n# 4. Check results\ncat load_test_reports_*/hypothesis_testing_*.md\n```\n\n---\n\n## Troubleshooting\n\n### Common Issues\n\n**1. \"bc incompatible with current locale\"**\n\n```bash\n# Solution A: Use C locale\nLC_NUMERIC=C ./bin/dlt.sh\n\n# Solution B: Install en_US.UTF-8\nsudo locale-gen en_US.UTF-8\n```\n\n**2. Connection Refused**\n\n```bash\n# Verify application is running\ncurl http://localhost:8000/\n\n# Check firewall\nsudo ufw status\n```\n\n**3. Timeout Errors**\n\n```bash\n# Increase timeout or reduce concurrency\nAB_TIMEOUT=60 AB_CONCURRENCY=5 ./bin/slt.sh\n```\n\n**4. Too Many Open Files**\n\n```bash\n# Increase file descriptor limit\nulimit -n 10000\n```\n\n**5. 
Unexpected Test Selection (v6.1)**\n\n```bash\n# If Mann-Whitney U is used when you expect Welch's t-test:\n# Check the distribution characteristics in the report\n\n# Example:\n# Distribution: non_normal|skew=2.34|kurt=8.91\n#               ^^^^^^^^^^\n# High skewness (2.34 \u003e 1.0) triggered Mann-Whitney U\n\n# This is CORRECT behavior - your data is skewed!\n```\n\n---\n\n## Upgrading from v6.2 to v6.3\n\n### Migration Guide\n\n-  **Zero-Risk Upgrade - 100% Backward Compatible**\n\n```bash\n# 1. Backup v6.2 (optional - recommended)\ncp bin/slt.sh bin/slt_v6.2_backup.sh\n\n# 2. Replace with v6.3\n# Download new slt.sh from repository\nchmod +x bin/slt.sh\n\n# 3. Test (works identically to v6.2)\n./bin/slt.sh\n\n# 4. Try new iteration delay feature\nITERATION_DELAY_SECONDS=5 ./bin/slt.sh\n```\n\n### What Changed\n\n**Same (100% compatible):**\n-  All CLI commands for both SLT and DLT\n-  Baseline file format\n-  Environment variables  \n-  Report locations\n-  All v6.2 functionality\n\n**Enhanced (SLT only):**\n-  Iteration delay support for rate limiting\n-  Configurable pacing between test cycles\n-  Better control for system under test stability\n-  Improved simulation of realistic user behavior\n\n**No configuration changes needed!**\n\n---\n\n## Documentation\n\n- **[USAGE_GUIDE.md](docs/USAGE_GUIDE.md)** - Comprehensive usage guide with real-world scenarios\n- **[REFERENCES.md](docs/REFERENCES.md)** - Academic and research references (updated for v6.2)\n- **[CHANGELOG.md](CHANGELOG.md)** - Version history and release notes\n- **[Performance Methodology](docs/methodology.md)** - Mathematical formulas and ISO 25010 compliance\n\n---\n\n## Research Foundations\n\nResilio v6.3 implements methodologies from:\n\n### Original Foundations (v6.0 \u0026 v6.1)\n- **Jain, R. (1991)** - Statistical methods for performance measurement\n- **Welch, B. L. (1947)** - Unequal variance t-test\n- **Cohen, J. 
(1988)** - Effect size interpretation\n- **ISO/IEC 25010:2011** - Performance efficiency metrics\n- **Barford \u0026 Crovella (1998)** - Workload characterization\n- **Gunther, N. J. (2007)** - Queueing theory and capacity planning\n- **Mann, H. B., \u0026 Whitney, D. R. (1947)** - Non-parametric rank-based comparison\n- **Wilcoxon, F. (1945)** - Rank-sum test theoretical foundation\n- **D'Agostino, R. B. (1971)** - Normality testing via skewness and kurtosis\n- **Kerby, D. S. (2014)** - Rank-biserial correlation for effect size\n\n### New in v6.2\n- **Ruxton, G. D. (2006)** - The unequal variance t-test is an underused alternative to Student's t-test and the Mann-Whitney U test.\n\n---\n\n## Version Comparison\n\n| Feature | v6.0 | v6.1 | v6.2 | v6.3 |\n|---------|------|------|------|------|\n| Welch's t-test | ✅ | ✅ | ✅ | ✅ |\n| Mann-Whitney U | ❌ | ✅ | ✅ | ✅ |\n| Automatic test selection | ❌ | ✅ | ✅ | ✅ |\n| Normality checking | ❌ | ✅ | ✅ | ✅ |\n| Cohen's d | ✅ | ✅ | ✅ | ✅ |\n| Rank-biserial r | ❌ | ✅ | ✅ | ✅ |\n| Baseline management | ✅ | ✅ | ✅ | ✅ |\n| Smart locale detection | ✅ | ✅ | ✅ | ✅ |\n| Python Math Engine (40x) | ❌ | ❌ | ✅ | ✅ |\n| Iteration Delay (Rate Limiting) | ❌ | ❌ | ❌ | ✅ |\n| Best for tail latencies | ⚠️ | ✅ | ✅ | ✅ |\n| Handles outliers | ⚠️ | ✅ | ✅ | ✅ |\n\n---\n\n## Contributing\n\nContributions are welcome! Please:\n\n1. Fork the repository\n2. Create a feature branch\n3. Include tests for new functionality\n4. Update documentation (including REFERENCES.md for new methods)\n5. 
Submit a pull request\n\n### Areas for Contribution\n\n- Multiple comparison correction (Bonferroni/Holm)\n- Sequential Probability Ratio Test (SPRT) for early stopping\n- Bayesian A/B testing as an alternative approach\n- Visualization dashboards for trends\n- Integration with monitoring tools (Prometheus, Grafana)\n\n---\n\n## License\n\nThis project is licensed under the MIT License.\n\nCopyright © 2025 M.Noermoehammad\n\n---\n\n## Support\n\n- **Issues**: [GitHub Issues](https://github.com/cakmoel/resilio/issues)\n- **Discussions**: [GitHub Discussions](https://github.com/cakmoel/resilio/discussions)\n- **Email**: alanmoehammad@gmail.com\n\n---\n\n## Citation\n\nIf you use Resilio in academic research, please cite:\n\n```bibtex\n@software{resilio2026,\n  author = {Noermoehammad, M.},\n  title = {Resilio: Research-Based Performance Testing Suite},\n  year = {2026},\n  version = {6.3.0},\n  url = {https://github.com/cakmoel/resilio}\n}\n```\n\n---\n\n**Resilio v6.3: Built for Speed, Tested for Durability, Proven by Science**\n\n*Now with iteration delay control for realistic load testing.*","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcakmoel%2Fresilio","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcakmoel%2Fresilio","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcakmoel%2Fresilio/lists"}