{"id":46347192,"url":"https://github.com/ruvnet/RuView","last_synced_at":"2026-03-05T08:00:51.825Z","repository":{"id":297731741,"uuid":"997737944","full_name":"ruvnet/RuView","owner":"ruvnet","description":"π RuView: WiFi DensePose turns commodity WiFi signals into real-time human pose estimation, vital sign monitoring, and presence detection — all without a single pixel of video. ","archived":false,"fork":false,"pushed_at":"2026-03-03T20:21:52.000Z","size":108962,"stargazers_count":25039,"open_issues_count":25,"forks_count":3088,"subscribers_count":140,"default_branch":"main","last_synced_at":"2026-03-03T20:47:22.478Z","etag":null,"topics":["densepose","densepose-controlnet","wifi"],"latest_commit_sha":null,"homepage":"https://github.com/ruvnet/ruvector/","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ruvnet.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"docs/security-audit-wasm-edge-vendor.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-06-07T04:32:30.000Z","updated_at":"2026-03-03T20:45:57.000Z","dependencies_parsed_at":"2025-06-07T05:36:11.781Z","dependency_job_id":null,"html_url":"https://github.com/ruvnet/RuView","commit_stats":null,"previous_names":["ruvnet/wifi-densepose","ruvnet/ruview"],"tags_count":3,"template":false,"template_full_name":null,"purl":"pkg:github/ruvnet/RuView","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ruvnet%2FRuView","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositor
ies/ruvnet%2FRuView/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ruvnet%2FRuView/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ruvnet%2FRuView/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ruvnet","download_url":"https://codeload.github.com/ruvnet/RuView/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ruvnet%2FRuView/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30115662,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-05T03:40:26.266Z","status":"ssl_error","status_checked_at":"2026-03-05T03:39:15.902Z","response_time":93,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["densepose","densepose-controlnet","wifi"],"created_at":"2026-03-04T22:00:32.315Z","updated_at":"2026-03-05T08:00:51.805Z","avatar_url":"https://github.com/ruvnet.png","language":"Rust","readme":"# π RuView\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/ruview-small-gemini.jpg\" alt=\"RuView - WiFi DensePose\" width=\"100%\"\u003e\n\u003c/p\u003e\n\n**See through walls with WiFi.** No cameras. No wearables. No Internet. 
Just radio waves.\n\nWiFi DensePose turns commodity WiFi signals into real-time human pose estimation, vital sign monitoring, and presence detection — all without a single pixel of video. \n\nBy analyzing Channel State Information (CSI) disturbances caused by human movement, the system reconstructs body position, breathing rate, and heartbeat using physics-based signal processing and machine learning. \n\n[Edge modules](#edge-intelligence-adr-041) are small programs that run directly on the ESP32 sensor — no internet needed, no cloud fees, instant response.\n\n[![Rust 1.85+](https://img.shields.io/badge/rust-1.85+-orange.svg)](https://www.rust-lang.org/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Tests: 1300+](https://img.shields.io/badge/tests-1300%2B-brightgreen.svg)](https://github.com/ruvnet/RuView)\n[![Docker: multi-arch](https://img.shields.io/badge/docker-amd64%20%2B%20arm64-blue.svg)](https://hub.docker.com/r/ruvnet/wifi-densepose)\n[![Vital Signs](https://img.shields.io/badge/vital%20signs-breathing%20%2B%20heartbeat-red.svg)](#vital-sign-detection)\n[![ESP32 Ready](https://img.shields.io/badge/ESP32--S3-CSI%20streaming-purple.svg)](#esp32-s3-hardware-pipeline)\n[![crates.io](https://img.shields.io/crates/v/wifi-densepose-ruvector.svg)](https://crates.io/crates/wifi-densepose-ruvector)\n\n \n\u003e | What | How | Speed |\n\u003e |------|-----|-------|\n\u003e | **Pose estimation** | CSI subcarrier amplitude/phase → DensePose UV maps | 54K fps (Rust) |\n\u003e | **Breathing detection** | Bandpass 0.1-0.5 Hz → FFT peak | 6-30 BPM |\n\u003e | **Heart rate** | Bandpass 0.8-2.0 Hz → FFT peak | 40-120 BPM |\n\u003e | **Presence sensing** | RSSI variance + motion band power | \u003c 1ms latency |\n\u003e | **Through-wall** | Fresnel zone geometry + multipath modeling | Up to 5m depth |\n\n```bash\n# 30 seconds to live sensing — no toolchain required\ndocker pull 
ruvnet/wifi-densepose:latest\ndocker run -p 3000:3000 ruvnet/wifi-densepose:latest\n# Open http://localhost:3000\n```\n\n\u003e [!NOTE]\n\u003e **CSI-capable hardware required.** Pose estimation, vital signs, and through-wall sensing rely on Channel State Information (CSI) — per-subcarrier amplitude and phase data that standard consumer WiFi does not expose. You need CSI-capable hardware (ESP32-S3 or a research NIC) for full functionality. Consumer WiFi laptops can only provide RSSI-based presence detection, which is significantly less capable.\n\n\u003e **Hardware options** for live CSI capture:\n\u003e\n\u003e | Option | Hardware | Cost | Full CSI | Capabilities |\n\u003e |--------|----------|------|----------|-------------|\n\u003e | **ESP32 Mesh** (recommended) | 3-6x ESP32-S3 + WiFi router | ~$54 | Yes | Pose, breathing, heartbeat, motion, presence |\n\u003e | **Research NIC** | Intel 5300 / Atheros AR9580 | ~$50-100 | Yes | Full CSI with 3x3 MIMO |\n\u003e | **Any WiFi** | Windows, macOS, or Linux laptop | $0 | No | RSSI-only: coarse presence and motion |\n\u003e\n\u003e No hardware? 
Verify the signal processing pipeline with the deterministic reference signal: `python v1/data/proof/verify.py`\n\u003e\n---\n\n## 📖 Documentation\n\n| Document | Description |\n|----------|-------------|\n| [User Guide](docs/user-guide.md) | Step-by-step guide: installation, first run, API usage, hardware setup, training |\n| [Build Guide](docs/build-guide.md) | Building from source (Rust and Python) |\n| [Architecture Decisions](docs/adr/README.md) | 44 ADRs — why each technical choice was made, organized by domain (hardware, signal processing, ML, platform, infrastructure) |\n| [Domain Models](docs/ddd/README.md) | 7 DDD models (RuvSense, Signal Processing, Training Pipeline, Hardware Platform, Sensing Server, WiFi-Mat, CHCI) — bounded contexts, aggregates, domain events, and ubiquitous language |\n\n---\n\n\n  \u003cimg src=\"assets/screen.png\" alt=\"WiFi DensePose — Live pose detection with setup guide\" width=\"800\"\u003e\n  \u003cbr\u003e\n  \u003cem\u003eReal-time pose skeleton from WiFi CSI signals — no cameras, no wearables\u003c/em\u003e\n\n\u003e The [server](#-quick-start) is optional for visualization and aggregation — the ESP32 [runs independently](#esp32-s3-hardware-pipeline) for presence detection, vital signs, and fall alerts.\n\n\n## 🚀 Key Features\n\n### Sensing\n\nSee people, breathing, and heartbeats through walls — using only WiFi signals already in the room.\n\n| | Feature | What It Means |\n|---|---------|---------------|\n| 🔒 | **Privacy-First** | Tracks human pose using only WiFi signals — no cameras, no video, no images stored |\n| 💓 | **Vital Signs** | Detects breathing rate (6-30 breaths/min) and heart rate (40-120 bpm) without any wearable |\n| 👥 | **Multi-Person** | Tracks multiple people simultaneously, each with independent pose and vitals — no hard software limit (physics: ~3-5 per AP with 56 subcarriers, more with multi-AP) |\n| 🧱 | **Through-Wall** | WiFi passes through walls, furniture, and debris — works where cameras cannot 
|\n| 🚑 | **Disaster Response** | Detects trapped survivors through rubble and classifies injury severity (START triage) |\n| 📡 | **Multistatic Mesh** | 4-6 low-cost sensor nodes work together, combining 12+ overlapping signal paths for full 360-degree room coverage with sub-inch accuracy and no person mix-ups ([ADR-029](docs/adr/ADR-029-ruvsense-multistatic-sensing-mode.md)) |\n| 🌐 | **Persistent Field Model** | The system learns the RF signature of each room — then subtracts the room to isolate human motion, detect drift over days, predict intent before movement starts, and flag spoofing attempts ([ADR-030](docs/adr/ADR-030-ruvsense-persistent-field-model.md)) |\n\n### Intelligence\n\nThe system learns on its own and gets smarter over time — no hand-tuning, no labeled data required.\n\n| | Feature | What It Means |\n|---|---------|---------------|\n| 🧠 | **Self-Learning** | Teaches itself from raw WiFi data — no labeled training sets, no cameras needed to bootstrap ([ADR-024](docs/adr/ADR-024-contrastive-csi-embedding-model.md)) |\n| 🎯 | **AI Signal Processing** | Attention networks, graph algorithms, and smart compression replace hand-tuned thresholds — adapts to each room automatically ([RuVector](https://github.com/ruvnet/ruvector)) |\n| 🌍 | **Works Everywhere** | Train once, deploy in any room — adversarial domain generalization strips environment bias so models transfer across rooms, buildings, and hardware ([ADR-027](docs/adr/ADR-027-cross-environment-domain-generalization.md)) |\n| 👁️ | **Cross-Viewpoint Fusion** | AI combines what each sensor sees from its own angle — fills in blind spots and depth ambiguity that no single viewpoint can resolve on its own ([ADR-031](docs/adr/ADR-031-ruview-sensing-first-rf-mode.md)) |\n| 🔮 | **Signal-Line Protocol** | A 6-stage processing pipeline transforms raw WiFi signals into structured body representations — from signal cleanup through graph-based spatial reasoning to final pose output 
([ADR-033](docs/adr/ADR-033-crv-signal-line-sensing-integration.md)) |\n| 🔒 | **QUIC Mesh Security** | All sensor-to-sensor communication is encrypted end-to-end with tamper detection, replay protection, and seamless reconnection if a node moves or drops offline ([ADR-032](docs/adr/ADR-032-multistatic-mesh-security-hardening.md)) |\n\n### Performance \u0026 Deployment\n\nFast enough for real-time use, small enough for edge devices, simple enough for one-command setup.\n\n| | Feature | What It Means |\n|---|---------|---------------|\n| ⚡ | **Real-Time** | Analyzes WiFi signals in under 100 microseconds per frame — fast enough for live monitoring |\n| 🦀 | **810x Faster** | Complete Rust rewrite: 54,000 frames/sec pipeline, multi-arch Docker image, 1,031+ tests |\n| 🐳 | **One-Command Setup** | `docker pull ruvnet/wifi-densepose:latest` — live sensing in 30 seconds, no toolchain needed (amd64 + arm64 / Apple Silicon) |\n| 📡 | **Fully Local** | Runs completely on a $9 ESP32 — no internet connection, no cloud account, no recurring fees. Detects presence, vital signs, and falls on-device with instant response |\n| 📦 | **Portable Models** | Trained models package into a single `.rvf` file — runs on edge, cloud, or browser (WASM) |\n\n---\n\n## 🔬 How It Works\n\nWiFi routers flood every room with radio waves. When a person moves — or even breathes — those waves scatter differently. 
WiFi DensePose reads that scattering pattern and reconstructs what happened:\n\n```\nWiFi Router → radio waves pass through room → hit human body → scatter\n    ↓\nESP32 mesh (4-6 nodes) captures CSI on channels 1/6/11 via TDM protocol\n    ↓\nMulti-Band Fusion: 3 channels × 56 subcarriers = 168 virtual subcarriers per link\n    ↓\nMultistatic Fusion: N×(N-1) links → attention-weighted cross-viewpoint embedding\n    ↓\nCoherence Gate: accept/reject measurements → stable for days without tuning\n    ↓\nSignal Processing: Hampel, SpotFi, Fresnel, BVP, spectrogram → clean features\n    ↓\nAI Backbone (RuVector): attention, graph algorithms, compression, field model\n    ↓\nSignal-Line Protocol (CRV): 6-stage gestalt → sensory → topology → coherence → search → model\n    ↓\nNeural Network: processed signals → 17 body keypoints + vital signs + room model\n    ↓\nOutput: real-time pose, breathing, heart rate, room fingerprint, drift alerts\n```\n\nNo training cameras required — the [Self-Learning system (ADR-024)](docs/adr/ADR-024-contrastive-csi-embedding-model.md) bootstraps from raw WiFi data alone. [MERIDIAN (ADR-027)](docs/adr/ADR-027-cross-environment-domain-generalization.md) ensures the model works in any room, not just the one it trained in.\n\n---\n\n## 🏢 Use Cases \u0026 Applications\n\nWiFi sensing works anywhere WiFi exists. No new hardware in most cases — just software on existing access points or an $8 ESP32 add-on. Because there are no cameras, deployments avoid privacy regulations (GDPR video, HIPAA imaging) by design.\n\n**Scaling:** Each AP distinguishes ~3-5 people (56 subcarriers). Multi-AP multiplies linearly — a 4-AP retail mesh covers ~15-20 occupants. 
No hard software limit; the practical ceiling is signal physics.\n\n| | Why WiFi sensing wins | Traditional alternative |\n|---|----------------------|----------------------|\n| 🔒 | **No video, no GDPR/HIPAA imaging rules** | Cameras require consent, signage, data retention policies |\n| 🧱 | **Works through walls, shelving, debris** | Cameras need line-of-sight per room |\n| 🌙 | **Works in total darkness** | Cameras need IR or visible light |\n| 💰 | **$0-$8 per zone** (existing WiFi or ESP32) | Camera systems: $200-$2,000 per zone |\n| 🔌 | **WiFi already deployed everywhere** | PIR/radar sensors require new wiring per room |\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🏥 Everyday\u003c/strong\u003e — Healthcare, retail, office, hospitality (commodity WiFi)\u003c/summary\u003e\n\n| Use Case | What It Does | Hardware | Key Metric | Edge Module |\n|----------|-------------|----------|------------|-------------|\n| **Elderly care / assisted living** | Fall detection, nighttime activity monitoring, breathing rate during sleep — no wearable compliance needed | 1 ESP32-S3 per room ($8) | Fall alert \u003c2s | [Sleep Apnea](docs/edge-modules/medical.md), [Gait Analysis](docs/edge-modules/medical.md) |\n| **Hospital patient monitoring** | Continuous breathing + heart rate for non-critical beds without wired sensors; nurse alert on anomaly | 1-2 APs per ward | Breathing: 6-30 BPM | [Respiratory Distress](docs/edge-modules/medical.md), [Cardiac Arrhythmia](docs/edge-modules/medical.md) |\n| **Emergency room triage** | Automated occupancy count + wait-time estimation; detect patient distress (abnormal breathing) in waiting areas | Existing hospital WiFi | Occupancy accuracy \u003e95% | [Queue Length](docs/edge-modules/retail.md), [Panic Motion](docs/edge-modules/security.md) |\n| **Retail occupancy \u0026 flow** | Real-time foot traffic, dwell time by zone, queue length — no cameras, no opt-in, GDPR-friendly | Existing store WiFi + 1 ESP32 | Dwell resolution 
~1m | [Customer Flow](docs/edge-modules/retail.md), [Dwell Heatmap](docs/edge-modules/retail.md) |\n| **Office space utilization** | Which desks/rooms are actually occupied, meeting room no-shows, HVAC optimization based on real presence | Existing enterprise WiFi | Presence latency \u003c1s | [Meeting Room](docs/edge-modules/building.md), [HVAC Presence](docs/edge-modules/building.md) |\n| **Hotel \u0026 hospitality** | Room occupancy without door sensors, minibar/bathroom usage patterns, energy savings on empty rooms | Existing hotel WiFi | 15-30% HVAC savings | [Energy Audit](docs/edge-modules/building.md), [Lighting Zones](docs/edge-modules/building.md) |\n| **Restaurants \u0026 food service** | Table turnover tracking, kitchen staff presence, restroom occupancy displays — no cameras in dining areas | Existing WiFi | Queue wait ±30s | [Table Turnover](docs/edge-modules/retail.md), [Queue Length](docs/edge-modules/retail.md) |\n| **Parking garages** | Pedestrian presence in stairwells and elevators where cameras have blind spots; security alert if someone lingers | Existing WiFi | Through-concrete walls | [Loitering](docs/edge-modules/security.md), [Elevator Count](docs/edge-modules/building.md) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🏟️ Specialized\u003c/strong\u003e — Events, fitness, education, civic (CSI-capable hardware)\u003c/summary\u003e\n\n| Use Case | What It Does | Hardware | Key Metric | Edge Module |\n|----------|-------------|----------|------------|-------------|\n| **Smart home automation** | Room-level presence triggers (lights, HVAC, music) that work through walls — no dead zones, no motion-sensor timeouts | 2-3 ESP32-S3 nodes ($24) | Through-wall range ~5m | [HVAC Presence](docs/edge-modules/building.md), [Lighting Zones](docs/edge-modules/building.md) |\n| **Fitness \u0026 sports** | Rep counting, posture correction, breathing cadence during exercise — no wearable, no camera in locker rooms | 3+ 
ESP32-S3 mesh | Pose: 17 keypoints | [Breathing Sync](docs/edge-modules/exotic.md), [Gait Analysis](docs/edge-modules/medical.md) |\n| **Childcare \u0026 schools** | Naptime breathing monitoring, playground headcount, restricted-area alerts — privacy-safe for minors | 2-4 ESP32-S3 per zone | Breathing: ±1 BPM | [Sleep Apnea](docs/edge-modules/medical.md), [Perimeter Breach](docs/edge-modules/security.md) |\n| **Event venues \u0026 concerts** | Crowd density mapping, crush-risk detection via breathing compression, emergency evacuation flow tracking | Multi-AP mesh (4-8 APs) | Density per m² | [Customer Flow](docs/edge-modules/retail.md), [Panic Motion](docs/edge-modules/security.md) |\n| **Stadiums \u0026 arenas** | Section-level occupancy for dynamic pricing, concession staffing, emergency egress flow modeling | Enterprise AP grid | 15-20 per AP mesh | [Dwell Heatmap](docs/edge-modules/retail.md), [Queue Length](docs/edge-modules/retail.md) |\n| **Houses of worship** | Attendance counting without facial recognition — privacy-sensitive congregations, multi-room campus tracking | Existing WiFi | Zone-level accuracy | [Elevator Count](docs/edge-modules/building.md), [Energy Audit](docs/edge-modules/building.md) |\n| **Warehouse \u0026 logistics** | Worker safety zones, forklift proximity alerts, occupancy in hazardous areas — works through shelving and pallets | Industrial AP mesh | Alert latency \u003c500ms | [Forklift Proximity](docs/edge-modules/industrial.md), [Confined Space](docs/edge-modules/industrial.md) |\n| **Civic infrastructure** | Public restroom occupancy (no cameras possible), subway platform crowding, shelter headcount during emergencies | Municipal WiFi + ESP32 | Real-time headcount | [Customer Flow](docs/edge-modules/retail.md), [Loitering](docs/edge-modules/security.md) |\n| **Museums \u0026 galleries** | Visitor flow heatmaps, exhibit dwell time, crowd bottleneck alerts — no cameras near artwork (flash/theft risk) | Existing WiFi | Zone dwell ±5s 
| [Dwell Heatmap](docs/edge-modules/retail.md), [Shelf Engagement](docs/edge-modules/retail.md) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🤖 Robotics \u0026 Industrial\u003c/strong\u003e — Autonomous systems, manufacturing, android spatial awareness\u003c/summary\u003e\n\nWiFi sensing gives robots and autonomous systems a spatial awareness layer that works where LIDAR and cameras fail — through dust, smoke, fog, and around corners. The CSI signal field acts as a \"sixth sense\" for detecting humans in the environment without requiring line-of-sight.\n\n| Use Case | What It Does | Hardware | Key Metric | Edge Module |\n|----------|-------------|----------|------------|-------------|\n| **Cobot safety zones** | Detect human presence near collaborative robots — auto-slow or stop before contact, even behind obstructions | 2-3 ESP32-S3 per cell | Presence latency \u003c100ms | [Forklift Proximity](docs/edge-modules/industrial.md), [Perimeter Breach](docs/edge-modules/security.md) |\n| **Warehouse AMR navigation** | Autonomous mobile robots sense humans around blind corners, through shelving racks — no LIDAR occlusion | ESP32 mesh along aisles | Through-shelf detection | [Forklift Proximity](docs/edge-modules/industrial.md), [Loitering](docs/edge-modules/security.md) |\n| **Android / humanoid spatial awareness** | Ambient human pose sensing for social robots — detect gestures, approach direction, and personal space without cameras always on | Onboard ESP32-S3 module | 17-keypoint pose | [Gesture Language](docs/edge-modules/exotic.md), [Emotion Detection](docs/edge-modules/exotic.md) |\n| **Manufacturing line monitoring** | Worker presence at each station, ergonomic posture alerts, headcount for shift compliance — works through equipment | Industrial AP per zone | Pose + breathing | [Confined Space](docs/edge-modules/industrial.md), [Gait Analysis](docs/edge-modules/medical.md) |\n| **Construction site safety** | Exclusion zone 
enforcement around heavy machinery, fall detection from scaffolding, personnel headcount | Ruggedized ESP32 mesh | Alert \u003c2s, through-dust | [Panic Motion](docs/edge-modules/security.md), [Structural Vibration](docs/edge-modules/industrial.md) |\n| **Agricultural robotics** | Detect farm workers near autonomous harvesters in dusty/foggy field conditions where cameras are unreliable | Weatherproof ESP32 nodes | Range ~10m open field | [Forklift Proximity](docs/edge-modules/industrial.md), [Rain Detection](docs/edge-modules/exotic.md) |\n| **Drone landing zones** | Verify landing area is clear of humans — WiFi sensing works in rain, dust, and low light where downward cameras fail | Ground ESP32 nodes | Presence: \u003e95% accuracy | [Perimeter Breach](docs/edge-modules/security.md), [Tailgating](docs/edge-modules/security.md) |\n| **Clean room monitoring** | Personnel tracking without cameras (particle contamination risk from camera fans) — gown compliance via pose | Existing cleanroom WiFi | No particulate emission | [Clean Room](docs/edge-modules/industrial.md), [Livestock Monitor](docs/edge-modules/industrial.md) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🔥 Extreme\u003c/strong\u003e — Through-wall, disaster, defense, underground\u003c/summary\u003e\n\nThese scenarios exploit WiFi's ability to penetrate solid materials — concrete, rubble, earth — where no optical or infrared sensor can reach. 
The WiFi-Mat disaster module (ADR-001) is specifically designed for this tier.\n\n| Use Case | What It Does | Hardware | Key Metric | Edge Module |\n|----------|-------------|----------|------------|-------------|\n| **Search \u0026 rescue (WiFi-Mat)** | Detect survivors through rubble/debris via breathing signature, START triage color classification, 3D localization | Portable ESP32 mesh + laptop | Through 30cm concrete | [Respiratory Distress](docs/edge-modules/medical.md), [Seizure Detection](docs/edge-modules/medical.md) |\n| **Firefighting** | Locate occupants through smoke and walls before entry; breathing detection confirms life signs remotely | Portable mesh on truck | Works in zero visibility | [Sleep Apnea](docs/edge-modules/medical.md), [Panic Motion](docs/edge-modules/security.md) |\n| **Prison \u0026 secure facilities** | Cell occupancy verification, distress detection (abnormal vitals), perimeter sensing — no camera blind spots | Dedicated AP infrastructure | 24/7 vital signs | [Cardiac Arrhythmia](docs/edge-modules/medical.md), [Loitering](docs/edge-modules/security.md) |\n| **Military / tactical** | Through-wall personnel detection, room clearing confirmation, hostage vital signs at standoff distance | Directional WiFi + custom FW | Range: 5m through wall | [Perimeter Breach](docs/edge-modules/security.md), [Weapon Detection](docs/edge-modules/security.md) |\n| **Border \u0026 perimeter security** | Detect human presence in tunnels, behind fences, in vehicles — passive sensing, no active illumination to reveal position | Concealed ESP32 mesh | Passive / covert | [Perimeter Breach](docs/edge-modules/security.md), [Tailgating](docs/edge-modules/security.md) |\n| **Mining \u0026 underground** | Worker presence in tunnels where GPS/cameras fail, breathing detection after collapse, headcount at safety points | Ruggedized ESP32 mesh | Through rock/earth | [Confined Space](docs/edge-modules/industrial.md), [Respiratory 
Distress](docs/edge-modules/medical.md) |\n| **Maritime \u0026 naval** | Below-deck personnel tracking through steel bulkheads (limited range, requires tuning), man-overboard detection | Ship WiFi + ESP32 | Through 1-2 bulkheads | [Structural Vibration](docs/edge-modules/industrial.md), [Panic Motion](docs/edge-modules/security.md) |\n| **Wildlife research** | Non-invasive animal activity monitoring in enclosures or dens — no light pollution, no visual disturbance | Weatherproof ESP32 nodes | Zero light emission | [Livestock Monitor](docs/edge-modules/industrial.md), [Dream Stage](docs/edge-modules/exotic.md) |\n\n\u003c/details\u003e\n\n### Edge Intelligence ([ADR-041](docs/adr/ADR-041-wasm-module-collection.md))\n\nSmall programs that run directly on the ESP32 sensor — no internet needed, no cloud fees, instant response. Each module is a tiny WASM file (5-30 KB) that you upload to the device over-the-air. It reads WiFi signal data and makes decisions locally in under 10 ms. [ADR-041](docs/adr/ADR-041-wasm-module-collection.md) defines 60 modules across 13 categories — all 60 are implemented with 609 tests passing.\n\n| | Category | Examples |\n|---|----------|---------|\n| 🏥 | [**Medical \u0026 Health**](docs/edge-modules/medical.md) | Sleep apnea detection, cardiac arrhythmia, gait analysis, seizure detection |\n| 🔐 | [**Security \u0026 Safety**](docs/edge-modules/security.md) | Intrusion detection, perimeter breach, loitering, panic motion |\n| 🏢 | [**Smart Building**](docs/edge-modules/building.md) | Zone occupancy, HVAC control, elevator counting, meeting room tracking |\n| 🛒 | [**Retail \u0026 Hospitality**](docs/edge-modules/retail.md) | Queue length, dwell heatmaps, customer flow, table turnover |\n| 🏭 | [**Industrial**](docs/edge-modules/industrial.md) | Forklift proximity, confined space monitoring, structural vibration |\n| 🔮 | [**Exotic \u0026 Research**](docs/edge-modules/exotic.md) | Sleep staging, emotion detection, sign language, breathing sync 
|\n| 📡 | [**Signal Intelligence**](docs/edge-modules/signal-intelligence.md) | Cleans and sharpens raw WiFi signals — focuses on important regions, filters noise, fills in missing data, and tracks which person is which |\n| 🧠 | [**Adaptive Learning**](docs/edge-modules/adaptive-learning.md) | The sensor learns new gestures and patterns on its own over time — no cloud needed, remembers what it learned even after updates |\n| 🗺️ | [**Spatial Reasoning**](docs/edge-modules/spatial-temporal.md) | Figures out where people are in a room, which zones matter most, and tracks movement across areas using graph-based spatial logic |\n| ⏱️ | [**Temporal Analysis**](docs/edge-modules/spatial-temporal.md) | Learns daily routines, detects when patterns break (someone didn't get up), and verifies safety rules are being followed over time |\n| 🛡️ | [**AI Security**](docs/edge-modules/ai-security.md) | Detects signal replay attacks, WiFi jamming, injection attempts, and flags abnormal behavior that could indicate tampering |\n| ⚛️ | [**Quantum-Inspired**](docs/edge-modules/autonomous.md) | Uses quantum-inspired math to map room-wide signal coherence and search for optimal sensor configurations |\n| 🤖 | [**Autonomous \u0026 Exotic**](docs/edge-modules/autonomous.md) | Self-managing sensor mesh — auto-heals dropped nodes, plans its own actions, and explores experimental signal representations |\n\nAll implemented modules are `no_std` Rust, share a [common utility library](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/vendor_common.rs), and talk to the host through a 12-function API. Full documentation: [**Edge Modules Guide**](docs/edge-modules/README.md). 
See the [complete implemented module list](#edge-module-list) below.\n\n\u003cdetails id=\"edge-module-list\"\u003e\n\u003csummary\u003e\u003cstrong\u003e🧩 Edge Intelligence — \u003ca href=\"docs/edge-modules/README.md\"\u003eAll 60 Modules Implemented\u003c/a\u003e\u003c/strong\u003e (ADR-041 complete)\u003c/summary\u003e\n\nAll 60 modules are implemented, tested (609 tests passing), and ready to deploy. They compile to `wasm32-unknown-unknown`, run on ESP32-S3 via WASM3, and share a [common utility library](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/vendor_common.rs). Source: [`crates/wifi-densepose-wasm-edge/src/`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/)\n\n**Core modules** (ADR-040 flagship + early implementations):\n\n| Module | File | What It Does |\n|--------|------|-------------|\n| Gesture Classifier | [`gesture.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/gesture.rs) | DTW template matching for hand gestures |\n| Coherence Filter | [`coherence.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/coherence.rs) | Phase coherence gating for signal quality |\n| Adversarial Detector | [`adversarial.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/adversarial.rs) | Detects physically impossible signal patterns |\n| Intrusion Detector | [`intrusion.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/intrusion.rs) | Human vs non-human motion classification |\n| Occupancy Counter | [`occupancy.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/occupancy.rs) | Zone-level person counting |\n| Vital Trend | [`vital_trend.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/vital_trend.rs) | Long-term breathing and heart rate trending |\n| RVF Parser | [`rvf.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/rvf.rs) | RVF container format parsing |\n\n**Vendor-integrated modules** (24 modules, 
ADR-041 Category 7):\n\n**📡 Signal Intelligence** — Real-time CSI analysis and feature extraction\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Flash Attention | [`sig_flash_attention.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sig_flash_attention.rs) | Tiled attention over 8 subcarrier groups — finds spatial focus regions and entropy | S (\u003c5ms) |\n| Coherence Gate | [`sig_coherence_gate.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sig_coherence_gate.rs) | Z-score phasor gating with hysteresis: Accept / PredictOnly / Reject / Recalibrate | L (\u003c2ms) |\n| Temporal Compress | [`sig_temporal_compress.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sig_temporal_compress.rs) | 3-tier adaptive quantization (8-bit hot / 5-bit warm / 3-bit cold) | L (\u003c2ms) |\n| Sparse Recovery | [`sig_sparse_recovery.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sig_sparse_recovery.rs) | ISTA L1 reconstruction for dropped subcarriers | H (\u003c10ms) |\n| Person Match | [`sig_mincut_person_match.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sig_mincut_person_match.rs) | Hungarian-lite bipartite assignment for multi-person tracking | S (\u003c5ms) |\n| Optimal Transport | [`sig_optimal_transport.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sig_optimal_transport.rs) | Sliced Wasserstein-1 distance with 4 projections | L (\u003c2ms) |\n\n**🧠 Adaptive Learning** — On-device learning without cloud connectivity\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| DTW Gesture Learn | [`lrn_dtw_gesture_learn.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/lrn_dtw_gesture_learn.rs) | User-teachable gesture recognition — 3-rehearsal protocol, 16 templates | S (\u003c5ms) |\n| Anomaly Attractor | 
[`lrn_anomaly_attractor.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/lrn_anomaly_attractor.rs) | 4D dynamical system attractor classification with Lyapunov exponents | H (\u003c10ms) |\n| Meta Adapt | [`lrn_meta_adapt.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/lrn_meta_adapt.rs) | Hill-climbing self-optimization with safety rollback | L (\u003c2ms) |\n| EWC Lifelong | [`lrn_ewc_lifelong.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/lrn_ewc_lifelong.rs) | Elastic Weight Consolidation — remembers past tasks while learning new ones | S (\u003c5ms) |\n\n**🗺️ Spatial Reasoning** — Location, proximity, and influence mapping\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| PageRank Influence | [`spt_pagerank_influence.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/spt_pagerank_influence.rs) | 4x4 cross-correlation graph with power iteration PageRank | L (\u003c2ms) |\n| Micro HNSW | [`spt_micro_hnsw.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/spt_micro_hnsw.rs) | 64-vector navigable small-world graph for nearest-neighbor search | S (\u003c5ms) |\n| Spiking Tracker | [`spt_spiking_tracker.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/spt_spiking_tracker.rs) | 32 LIF neurons + 4 output zone neurons with STDP learning | S (\u003c5ms) |\n\n**⏱️ Temporal Analysis** — Activity patterns, logic verification, autonomous planning\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Pattern Sequence | [`tmp_pattern_sequence.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/tmp_pattern_sequence.rs) | Activity routine detection and deviation alerts | S (\u003c5ms) |\n| Temporal Logic Guard | [`tmp_temporal_logic_guard.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/tmp_temporal_logic_guard.rs) | LTL formula verification on CSI 
event streams | S (\u003c5ms) |\n| GOAP Autonomy | [`tmp_goap_autonomy.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/tmp_goap_autonomy.rs) | Goal-Oriented Action Planning for autonomous module management | S (\u003c5ms) |\n\n**🛡️ AI Security** — Tamper detection and behavioral anomaly profiling\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Prompt Shield | [`ais_prompt_shield.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ais_prompt_shield.rs) | FNV-1a replay detection, injection detection (10x amplitude), jamming (SNR) | L (\u003c2ms) |\n| Behavioral Profiler | [`ais_behavioral_profiler.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ais_behavioral_profiler.rs) | 6D behavioral profile with Mahalanobis anomaly scoring | S (\u003c5ms) |\n\n**⚛️ Quantum-Inspired** — Quantum computing metaphors applied to CSI analysis\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Quantum Coherence | [`qnt_quantum_coherence.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/qnt_quantum_coherence.rs) | Bloch sphere mapping, Von Neumann entropy, decoherence detection | S (\u003c5ms) |\n| Interference Search | [`qnt_interference_search.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/qnt_interference_search.rs) | 16 room-state hypotheses with Grover-inspired oracle + diffusion | S (\u003c5ms) |\n\n**🤖 Autonomous Systems** — Self-governing and self-healing behaviors\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Psycho-Symbolic | [`aut_psycho_symbolic.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/aut_psycho_symbolic.rs) | 16-rule forward-chaining knowledge base with contradiction detection | S (\u003c5ms) |\n| Self-Healing Mesh | 
[`aut_self_healing_mesh.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/aut_self_healing_mesh.rs) | 8-node mesh with health tracking, degradation/recovery, coverage healing | S (\u003c5ms) |\n\n**🔮 Exotic (Vendor)** — Novel mathematical models for CSI interpretation\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Time Crystal | [`exo_time_crystal.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_time_crystal.rs) | Autocorrelation subharmonic detection in 256-frame history | S (\u003c5ms) |\n| Hyperbolic Space | [`exo_hyperbolic_space.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_hyperbolic_space.rs) | Poincare ball embedding with 32 reference locations, hyperbolic distance | S (\u003c5ms) |\n\n**🏥 Medical \u0026 Health** (Category 1) — Contactless health monitoring\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Sleep Apnea | [`med_sleep_apnea.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/med_sleep_apnea.rs) | Detects breathing pauses during sleep | S (\u003c5ms) |\n| Cardiac Arrhythmia | [`med_cardiac_arrhythmia.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/med_cardiac_arrhythmia.rs) | Monitors heart rate for irregular rhythms | S (\u003c5ms) |\n| Respiratory Distress | [`med_respiratory_distress.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/med_respiratory_distress.rs) | Alerts on abnormal breathing patterns | S (\u003c5ms) |\n| Gait Analysis | [`med_gait_analysis.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/med_gait_analysis.rs) | Tracks walking patterns and detects changes | S (\u003c5ms) |\n| Seizure Detection | [`med_seizure_detect.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/med_seizure_detect.rs) | 6-state machine for tonic-clonic seizure recognition | S (\u003c5ms) |\n\n**🔐 Security \u0026 
Safety** (Category 2) — Perimeter and threat detection\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Perimeter Breach | [`sec_perimeter_breach.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sec_perimeter_breach.rs) | Detects boundary crossings with approach/departure | S (\u003c5ms) |\n| Weapon Detection | [`sec_weapon_detect.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sec_weapon_detect.rs) | Metal anomaly detection via CSI amplitude shifts | S (\u003c5ms) |\n| Tailgating | [`sec_tailgating.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sec_tailgating.rs) | Detects unauthorized follow-through at access points | S (\u003c5ms) |\n| Loitering | [`sec_loitering.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sec_loitering.rs) | Alerts when someone lingers too long in a zone | S (\u003c5ms) |\n| Panic Motion | [`sec_panic_motion.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/sec_panic_motion.rs) | Detects fleeing, struggling, or panic movement | S (\u003c5ms) |\n\n**🏢 Smart Building** (Category 3) — Automation and energy efficiency\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| HVAC Presence | [`bld_hvac_presence.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/bld_hvac_presence.rs) | Occupancy-driven HVAC control with departure countdown | S (\u003c5ms) |\n| Lighting Zones | [`bld_lighting_zones.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/bld_lighting_zones.rs) | Auto-dim/off lighting based on zone activity | S (\u003c5ms) |\n| Elevator Count | [`bld_elevator_count.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/bld_elevator_count.rs) | Counts people entering/leaving with overload warning | S (\u003c5ms) |\n| Meeting Room | 
[`bld_meeting_room.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/bld_meeting_room.rs) | Tracks meeting lifecycle: start, headcount, end, availability | S (\u003c5ms) |\n| Energy Audit | [`bld_energy_audit.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/bld_energy_audit.rs) | Tracks after-hours usage and room utilization rates | S (\u003c5ms) |\n\n**🛒 Retail \u0026 Hospitality** (Category 4) — Customer insights without cameras\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Queue Length | [`ret_queue_length.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ret_queue_length.rs) | Estimates queue size and wait times | S (\u003c5ms) |\n| Dwell Heatmap | [`ret_dwell_heatmap.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ret_dwell_heatmap.rs) | Shows where people spend time (hot/cold zones) | S (\u003c5ms) |\n| Customer Flow | [`ret_customer_flow.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ret_customer_flow.rs) | Counts ins/outs and tracks net occupancy | S (\u003c5ms) |\n| Table Turnover | [`ret_table_turnover.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ret_table_turnover.rs) | Restaurant table lifecycle: seated, dining, vacated | S (\u003c5ms) |\n| Shelf Engagement | [`ret_shelf_engagement.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ret_shelf_engagement.rs) | Detects browsing, considering, and reaching for products | S (\u003c5ms) |\n\n**🏭 Industrial \u0026 Specialized** (Category 5) — Safety and compliance\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Forklift Proximity | [`ind_forklift_proximity.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ind_forklift_proximity.rs) | Warns when people get too close to vehicles | S (\u003c5ms) |\n| Confined Space | 
[`ind_confined_space.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ind_confined_space.rs) | OSHA-compliant worker monitoring with extraction alerts | S (\u003c5ms) |\n| Clean Room | [`ind_clean_room.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ind_clean_room.rs) | Occupancy limits and turbulent motion detection | S (\u003c5ms) |\n| Livestock Monitor | [`ind_livestock_monitor.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ind_livestock_monitor.rs) | Animal presence, stillness, and escape alerts | S (\u003c5ms) |\n| Structural Vibration | [`ind_structural_vibration.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/ind_structural_vibration.rs) | Seismic events, mechanical resonance, structural drift | S (\u003c5ms) |\n\n**🔮 Exotic \u0026 Research** (Category 6) — Experimental sensing applications\n\n| Module | File | What It Does | Budget |\n|--------|------|-------------|--------|\n| Dream Stage | [`exo_dream_stage.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_dream_stage.rs) | Contactless sleep stage classification (wake/light/deep/REM) | S (\u003c5ms) |\n| Emotion Detection | [`exo_emotion_detect.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_emotion_detect.rs) | Arousal, stress, and calm detection from micro-movements | S (\u003c5ms) |\n| Gesture Language | [`exo_gesture_language.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_gesture_language.rs) | Sign language letter recognition via WiFi | S (\u003c5ms) |\n| Music Conductor | [`exo_music_conductor.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_music_conductor.rs) | Tempo and dynamic tracking from conducting gestures | S (\u003c5ms) |\n| Plant Growth | [`exo_plant_growth.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_plant_growth.rs) | Monitors plant growth, circadian rhythms, wilt detection | S 
(\u003c5ms) |\n| Ghost Hunter | [`exo_ghost_hunter.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_ghost_hunter.rs) | Environmental anomaly classification (draft/insect/wind/unknown) | S (\u003c5ms) |\n| Rain Detection | [`exo_rain_detect.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_rain_detect.rs) | Detects rain onset, intensity, and cessation via signal scatter | S (\u003c5ms) |\n| Breathing Sync | [`exo_breathing_sync.rs`](rust-port/wifi-densepose-rs/crates/wifi-densepose-wasm-edge/src/exo_breathing_sync.rs) | Detects synchronized breathing between multiple people | S (\u003c5ms) |\n\n\u003c/details\u003e\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🧠 Self-Learning WiFi AI (ADR-024)\u003c/strong\u003e — Adaptive recognition, self-optimization, and intelligent anomaly detection\u003c/summary\u003e\n\nEvery WiFi signal that passes through a room creates a unique fingerprint of that space. WiFi-DensePose already reads these fingerprints to track people, but until now it threw away the internal \"understanding\" after each reading. 
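Concretely, preserving that understanding means keeping each reading's embedding vector and comparing new readings against stored ones. A minimal pure-Python sketch of the comparison, using hypothetical 4-dim vectors in place of the real 128-dim fingerprints (the function names and threshold are illustrative, not the project's API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length fingerprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(fingerprint, known_rooms, threshold=0.9):
    """Return the best-matching room, or 'anomaly' if nothing is close enough."""
    best_room, best_sim = None, -1.0
    for room, ref in known_rooms.items():
        sim = cosine(fingerprint, ref)
        if sim > best_sim:
            best_room, best_sim = room, sim
    return best_room if best_sim >= threshold else "anomaly"

# Hypothetical stored per-room fingerprints (toy values).
rooms = {
    "kitchen": [0.9, 0.1, 0.0, 0.2],
    "bedroom": [0.1, 0.8, 0.3, 0.0],
}
print(classify([0.85, 0.15, 0.05, 0.18], rooms))  # close to the kitchen profile
print(classify([0.0, 0.0, 1.0, 0.0], rooms))      # matches nothing well
```

Room identification and anomaly detection then fall out of the same primitive: a strong match names the room, and a weak best match flags something never seen before.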
The Self-Learning WiFi AI captures and preserves that understanding as compact, reusable vectors — and continuously optimizes itself for each new environment.\n\n**What it does in plain terms:**\n- Turns any WiFi signal into a 128-number \"fingerprint\" that uniquely describes what's happening in a room\n- Learns entirely on its own from raw WiFi data — no cameras, no labeling, no human supervision needed\n- Recognizes rooms, detects intruders, identifies people, and classifies activities using only WiFi\n- Runs on an $8 ESP32 chip (the entire model fits in 55 KB of memory)\n- Produces both body pose tracking AND environment fingerprints in a single computation\n\n**Key Capabilities**\n\n| What | How it works | Why it matters |\n|------|-------------|----------------|\n| **Self-supervised learning** | The model watches WiFi signals and teaches itself what \"similar\" and \"different\" look like, without any human-labeled data | Deploy anywhere — just plug in a WiFi sensor and wait 10 minutes |\n| **Room identification** | Each room produces a distinct WiFi fingerprint pattern | Know which room someone is in without GPS or beacons |\n| **Anomaly detection** | An unexpected person or event creates a fingerprint that doesn't match anything seen before | Automatic intrusion and fall detection as a free byproduct |\n| **Person re-identification** | Each person disturbs WiFi in a slightly different way, creating a personal signature | Track individuals across sessions without cameras |\n| **Environment adaptation** | MicroLoRA adapters (1,792 parameters per room) fine-tune the model for each new space | Adapts to a new room with minimal data — 93% less than retraining from scratch |\n| **Memory preservation** | EWC++ regularization remembers what was learned during pretraining | Switching to a new task doesn't erase prior knowledge |\n| **Hard-negative mining** | Training focuses on the most confusing examples to learn faster | Better accuracy with the same amount of 
training data |\n\n**Architecture**\n\n```\nWiFi Signal [56 channels] → Transformer + Graph Neural Network\n                                  ├→ 128-dim environment fingerprint (for search + identification)\n                                  └→ 17-joint body pose (for human tracking)\n```\n\n**Quick Start**\n\n```bash\n# Step 1: Learn from raw WiFi data (no labels needed)\ncargo run -p wifi-densepose-sensing-server -- --pretrain --dataset data/csi/ --pretrain-epochs 50\n\n# Step 2: Fine-tune with pose labels for full capability\ncargo run -p wifi-densepose-sensing-server -- --train --dataset data/mmfi/ --epochs 100 --save-rvf model.rvf\n\n# Step 3: Use the model — extract fingerprints from live WiFi\ncargo run -p wifi-densepose-sensing-server -- --model model.rvf --embed\n\n# Step 4: Search — find similar environments or detect anomalies\ncargo run -p wifi-densepose-sensing-server -- --model model.rvf --build-index env\n```\n\n**Training Modes**\n\n| Mode | What you need | What you get |\n|------|--------------|-------------|\n| Self-Supervised | Just raw WiFi data | A model that understands WiFi signal structure |\n| Supervised | WiFi data + body pose labels | Full pose tracking + environment fingerprints |\n| Cross-Modal | WiFi data + camera footage | Fingerprints aligned with visual understanding |\n\n**Fingerprint Index Types**\n\n| Index | What it stores | Real-world use |\n|-------|---------------|----------------|\n| `env_fingerprint` | Average room fingerprint | \"Is this the kitchen or the bedroom?\" |\n| `activity_pattern` | Activity boundaries | \"Is someone cooking, sleeping, or exercising?\" |\n| `temporal_baseline` | Normal conditions | \"Something unusual just happened in this room\" |\n| `person_track` | Individual movement signatures | \"Person A just entered the living room\" |\n\n**Model Size**\n\n| Component | Parameters | Memory (on ESP32) |\n|-----------|-----------|-------------------|\n| Transformer backbone | ~28,000 | 28 KB |\n| Embedding 
projection head | ~25,000 | 25 KB |\n| Per-room MicroLoRA adapter | ~1,800 | 2 KB |\n| **Total** | **~55,000** | **55 KB** (of 520 KB available) |\n\nThe self-learning system builds on the [AI Backbone (RuVector)](#ai-backbone-ruvector) signal-processing layer — attention, graph algorithms, and compression — adding contrastive learning on top.\n\nSee [`docs/adr/ADR-024-contrastive-csi-embedding-model.md`](docs/adr/ADR-024-contrastive-csi-embedding-model.md) for full architectural details.\n\n\u003c/details\u003e\n\n---\n\n## 📦 Installation\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eGuided Installer\u003c/strong\u003e — Interactive hardware detection and profile selection\u003c/summary\u003e\n\n```bash\n./install.sh\n```\n\nThe installer walks through 7 steps: system detection, toolchain check, WiFi hardware scan, profile recommendation, dependency install, build, and verification.\n\n| Profile | What it installs | Size | Requirements |\n|---------|-----------------|------|-------------|\n| `verify` | Pipeline verification only | ~5 MB | Python 3.8+ |\n| `python` | Full Python API server + sensing | ~500 MB | Python 3.8+ |\n| `rust` | Rust pipeline (~810x faster) | ~200 MB | Rust 1.70+ |\n| `browser` | WASM for in-browser execution | ~10 MB | Rust + wasm-pack |\n| `iot` | ESP32 sensor mesh + aggregator | varies | Rust + ESP-IDF |\n| `docker` | Docker-based deployment | ~1 GB | Docker |\n| `field` | WiFi-Mat disaster response kit | ~62 MB | Rust + wasm-pack |\n| `full` | Everything available | ~2 GB | All toolchains |\n\n```bash\n# Non-interactive\n./install.sh --profile rust --yes\n\n# Hardware check only\n./install.sh --check-only\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eFrom Source\u003c/strong\u003e — Rust (primary) or Python\u003c/summary\u003e\n\n```bash\ngit clone https://github.com/ruvnet/RuView.git\ncd RuView\n\n# Rust (primary — 810x faster)\ncd rust-port/wifi-densepose-rs\ncargo build 
--release\ncargo test --workspace\n\n# Python (legacy v1)\npip install -r requirements.txt\npip install -e .\n\n# Or via pip\npip install wifi-densepose\npip install wifi-densepose[gpu]   # GPU acceleration\npip install wifi-densepose[all]   # All optional deps\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDocker\u003c/strong\u003e — Pre-built images, no toolchain needed\u003c/summary\u003e\n\n```bash\n# Rust sensing server (132 MB — recommended)\ndocker pull ruvnet/wifi-densepose:latest\ndocker run -p 3000:3000 -p 3001:3001 -p 5005:5005/udp ruvnet/wifi-densepose:latest\n\n# Python sensing pipeline (569 MB)\ndocker pull ruvnet/wifi-densepose:python\ndocker run -p 8765:8765 -p 8080:8080 ruvnet/wifi-densepose:python\n\n# Both via docker-compose\ncd docker \u0026\u0026 docker compose up\n\n# Export RVF model\ndocker run --rm -v $(pwd):/out ruvnet/wifi-densepose:latest --export-rvf /out/model.rvf\n```\n\n| Image | Tag | Platforms | Ports |\n|-------|-----|-----------|-------|\n| `ruvnet/wifi-densepose` | `latest`, `rust` | linux/amd64, linux/arm64 | 3000 (REST), 3001 (WS), 5005/udp (ESP32) |\n| `ruvnet/wifi-densepose` | `python` | linux/amd64 | 8765 (WS), 8080 (UI) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eSystem Requirements\u003c/strong\u003e\u003c/summary\u003e\n\n- **Rust**: 1.70+ (primary runtime — install via [rustup](https://rustup.rs/))\n- **Python**: 3.8+ (for verification and legacy v1 API)\n- **OS**: Linux (Ubuntu 18.04+), macOS (10.15+), Windows 10+\n- **Memory**: Minimum 4GB RAM, Recommended 8GB+\n- **Storage**: 2GB free space for models and data\n- **Network**: WiFi interface with CSI capability (optional — installer detects what you have)\n- **GPU**: Optional (NVIDIA CUDA or Apple Metal)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eRust Crates\u003c/strong\u003e — Individual crates on crates.io\u003c/summary\u003e\n\nThe Rust 
workspace consists of 15 crates, all published to [crates.io](https://crates.io/):\n\n```bash\n# Add individual crates to your Cargo.toml\ncargo add wifi-densepose-core       # Types, traits, errors\ncargo add wifi-densepose-signal     # CSI signal processing (6 SOTA algorithms)\ncargo add wifi-densepose-nn         # Neural inference (ONNX, PyTorch, Candle)\ncargo add wifi-densepose-vitals     # Vital sign extraction (breathing + heart rate)\ncargo add wifi-densepose-mat        # Disaster response (MAT survivor detection)\ncargo add wifi-densepose-hardware   # ESP32, Intel 5300, Atheros sensors\ncargo add wifi-densepose-train      # Training pipeline (MM-Fi dataset)\ncargo add wifi-densepose-wifiscan   # Multi-BSSID WiFi scanning\ncargo add wifi-densepose-ruvector   # RuVector v2.0.4 integration layer (ADR-017)\n```\n\n| Crate | Description | RuVector | crates.io |\n|-------|-------------|----------|-----------|\n| [`wifi-densepose-core`](https://crates.io/crates/wifi-densepose-core) | Foundation types, traits, and utilities | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-core.svg)](https://crates.io/crates/wifi-densepose-core) |\n| [`wifi-densepose-signal`](https://crates.io/crates/wifi-densepose-signal) | SOTA CSI signal processing (SpotFi, FarSense, Widar 3.0) | `mincut`, `attn-mincut`, `attention`, `solver` | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-signal.svg)](https://crates.io/crates/wifi-densepose-signal) |\n| [`wifi-densepose-nn`](https://crates.io/crates/wifi-densepose-nn) | Multi-backend inference (ONNX, PyTorch, Candle) | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-nn.svg)](https://crates.io/crates/wifi-densepose-nn) |\n| [`wifi-densepose-train`](https://crates.io/crates/wifi-densepose-train) | Training pipeline with MM-Fi dataset (NeurIPS 2023) | **All 5** | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-train.svg)](https://crates.io/crates/wifi-densepose-train) |\n| 
[`wifi-densepose-mat`](https://crates.io/crates/wifi-densepose-mat) | Mass Casualty Assessment Tool (disaster survivor detection) | `solver`, `temporal-tensor` | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-mat.svg)](https://crates.io/crates/wifi-densepose-mat) |\n| [`wifi-densepose-ruvector`](https://crates.io/crates/wifi-densepose-ruvector) | RuVector v2.0.4 integration layer — 7 signal+MAT integration points (ADR-017) | **All 5** | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-ruvector.svg)](https://crates.io/crates/wifi-densepose-ruvector) |\n| [`wifi-densepose-vitals`](https://crates.io/crates/wifi-densepose-vitals) | Vital signs: breathing (6-30 BPM), heart rate (40-120 BPM) | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-vitals.svg)](https://crates.io/crates/wifi-densepose-vitals) |\n| [`wifi-densepose-hardware`](https://crates.io/crates/wifi-densepose-hardware) | ESP32, Intel 5300, Atheros CSI sensor interfaces | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-hardware.svg)](https://crates.io/crates/wifi-densepose-hardware) |\n| [`wifi-densepose-wifiscan`](https://crates.io/crates/wifi-densepose-wifiscan) | Multi-BSSID WiFi scanning (Windows, macOS, Linux) | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-wifiscan.svg)](https://crates.io/crates/wifi-densepose-wifiscan) |\n| [`wifi-densepose-wasm`](https://crates.io/crates/wifi-densepose-wasm) | WebAssembly bindings for browser deployment | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-wasm.svg)](https://crates.io/crates/wifi-densepose-wasm) |\n| [`wifi-densepose-sensing-server`](https://crates.io/crates/wifi-densepose-sensing-server) | Axum server: UDP ingestion, WebSocket broadcast | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-sensing-server.svg)](https://crates.io/crates/wifi-densepose-sensing-server) |\n| [`wifi-densepose-cli`](https://crates.io/crates/wifi-densepose-cli) | 
Command-line tool for MAT disaster scanning | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-cli.svg)](https://crates.io/crates/wifi-densepose-cli) |\n| [`wifi-densepose-api`](https://crates.io/crates/wifi-densepose-api) | REST + WebSocket API layer | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-api.svg)](https://crates.io/crates/wifi-densepose-api) |\n| [`wifi-densepose-config`](https://crates.io/crates/wifi-densepose-config) | Configuration management | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-config.svg)](https://crates.io/crates/wifi-densepose-config) |\n| [`wifi-densepose-db`](https://crates.io/crates/wifi-densepose-db) | Database persistence (PostgreSQL, SQLite, Redis) | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-db.svg)](https://crates.io/crates/wifi-densepose-db) |\n\nAll crates integrate with [RuVector v2.0.4](https://github.com/ruvnet/ruvector) — see [AI Backbone](#ai-backbone-ruvector) below.\n\n\u003c/details\u003e\n\n---\n\n## 🚀 Quick Start\n\n\u003cdetails open\u003e\n\u003csummary\u003e\u003cstrong\u003eFirst API call in 3 commands\u003c/strong\u003e\u003c/summary\u003e\n\n### 1. Install\n\n```bash\n# Fastest path — Docker\ndocker pull ruvnet/wifi-densepose:latest\ndocker run -p 3000:3000 ruvnet/wifi-densepose:latest\n\n# Or from source (Rust)\n./install.sh --profile rust --yes\n```\n\n### 2. Start the System\n\n```python\nfrom wifi_densepose import WiFiDensePose\n\nsystem = WiFiDensePose()\nsystem.start()\nposes = system.get_latest_poses()\nprint(f\"Detected {len(poses)} persons\")\nsystem.stop()\n```\n\n### 3. REST API\n\n```bash\n# Health check\ncurl http://localhost:3000/health\n\n# Latest sensing frame\ncurl http://localhost:3000/api/v1/sensing/latest\n\n# Vital signs\ncurl http://localhost:3000/api/v1/vital-signs\n\n# Pose estimation\ncurl http://localhost:3000/api/v1/pose/current\n\n# Server info\ncurl http://localhost:3000/api/v1/info\n```\n\n### 4. 
Real-time WebSocket\n\n```python\nimport asyncio, websockets, json\n\nasync def stream():\n    async with websockets.connect(\"ws://localhost:3001/ws/sensing\") as ws:\n        async for msg in ws:\n            data = json.loads(msg)\n            print(f\"Persons: {len(data.get('persons', []))}\")\n\nasyncio.run(stream())\n```\n\n\u003c/details\u003e\n\n---\n\n## 📋 Table of Contents\n\n\u003cdetails open\u003e\n\u003csummary\u003e\u003cstrong\u003e📡 Signal Processing \u0026 Sensing\u003c/strong\u003e — From raw WiFi frames to vital signs\u003c/summary\u003e\n\nThe signal processing stack transforms raw WiFi Channel State Information into actionable human sensing data. Starting from 56-192 subcarrier complex values captured at 20 Hz, the pipeline applies research-grade algorithms (SpotFi phase correction, Hampel outlier rejection, Fresnel zone modeling) to extract breathing rate, heart rate, motion level, and multi-person body pose — all in pure Rust with zero external ML dependencies.\n\n| Section | Description | Docs |\n|---------|-------------|------|\n| [Key Features](#key-features) | Sensing, Intelligence, and Performance \u0026 Deployment capabilities | — |\n| [How It Works](#how-it-works) | End-to-end pipeline: radio waves → CSI capture → signal processing → AI → pose + vitals | — |\n| [ESP32-S3 Hardware Pipeline](#esp32-s3-hardware-pipeline) | 20 Hz CSI streaming, binary frame parsing, flash \u0026 provision | [ADR-018](docs/adr/ADR-018-esp32-dev-implementation.md) · [Tutorial #34](https://github.com/ruvnet/RuView/issues/34) |\n| [Vital Sign Detection](#vital-sign-detection) | Breathing 6-30 BPM, heartbeat 40-120 BPM, FFT peak detection | [ADR-021](docs/adr/ADR-021-vital-sign-detection-rvdna-pipeline.md) |\n| [WiFi Scan Domain Layer](#wifi-scan-domain-layer) | 8-stage RSSI pipeline, multi-BSSID fingerprinting, Windows WiFi | [ADR-022](docs/adr/ADR-022-windows-wifi-enhanced-fidelity-ruvector.md) · [Tutorial #36](https://github.com/ruvnet/RuView/issues/36) 
|\n| [WiFi-Mat Disaster Response](#wifi-mat-disaster-response) | Search \u0026 rescue, START triage, 3D localization through debris | [ADR-001](docs/adr/ADR-001-wifi-mat-disaster-detection.md) · [User Guide](docs/wifi-mat-user-guide.md) |\n| [SOTA Signal Processing](#sota-signal-processing) | SpotFi, Hampel, Fresnel, STFT spectrogram, subcarrier selection, BVP | [ADR-014](docs/adr/ADR-014-sota-signal-processing.md) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🧠 Models \u0026 Training\u003c/strong\u003e — DensePose pipeline, RVF containers, SONA adaptation, RuVector integration\u003c/summary\u003e\n\nThe neural pipeline uses a graph transformer with cross-attention to map CSI feature matrices to 17 COCO body keypoints and DensePose UV coordinates. Models are packaged as single-file `.rvf` containers with progressive loading (Layer A instant, Layer B warm, Layer C full). SONA (Self-Optimizing Neural Architecture) enables continuous on-device adaptation via micro-LoRA + EWC++ without catastrophic forgetting. 
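The EWC++ regularization mentioned above can be sketched in a few lines: after a task is learned, each weight receives an importance estimate, and later training pays a quadratic penalty for moving important weights away from their learned values. A toy pure-Python illustration (hypothetical numbers; the real pipeline estimates importance from training statistics, and this is not the SONA code):

```python
def ewc_penalty(weights, anchor_weights, importance, lam=1.0):
    """Quadratic penalty for drifting away from previously learned weights."""
    return lam * sum(
        f * (w - w0) ** 2
        for w, w0, f in zip(weights, anchor_weights, importance)
    )

anchor = [0.5, -1.2, 0.8]        # weights after pretraining (toy values)
importance = [10.0, 0.1, 5.0]    # high = critical to previously learned tasks

# Moving an unimportant weight is cheap...
cheap = ewc_penalty([0.5, 0.3, 0.8], anchor, importance)
# ...moving an important one by the same amount is expensive.
costly = ewc_penalty([2.0, -1.2, 0.8], anchor, importance)
print(cheap, costly)
```

Adding this penalty to the fine-tuning loss lets adaptation to a new room pull only on weights the earlier tasks do not depend on, which is what prevents catastrophic forgetting.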
Signal processing is powered by 5 [RuVector](https://github.com/ruvnet/ruvector) crates (v2.0.4) with 7 integration points across the Rust workspace, plus 6 additional vendored crates for inference and graph intelligence.\n\n| Section | Description | Docs |\n|---------|-------------|------|\n| [RVF Model Container](#rvf-model-container) | Binary packaging with Ed25519 signing, progressive 3-layer loading, SIMD quantization | [ADR-023](docs/adr/ADR-023-trained-densepose-model-ruvector-pipeline.md) |\n| [Training \u0026 Fine-Tuning](#training--fine-tuning) | 8-phase pure Rust pipeline (7,832 lines), MM-Fi/Wi-Pose pre-training, 6-term composite loss, SONA LoRA | [ADR-023](docs/adr/ADR-023-trained-densepose-model-ruvector-pipeline.md) |\n| [RuVector Crates](#ruvector-crates) | 11 vendored Rust crates from [ruvector](https://github.com/ruvnet/ruvector): attention, min-cut, solver, GNN, HNSW, temporal compression, sparse inference | [GitHub](https://github.com/ruvnet/ruvector) · [Source](vendor/ruvector/) |\n| [AI Backbone (RuVector)](#ai-backbone-ruvector) | 5 AI capabilities replacing hand-tuned thresholds: attention, graph min-cut, sparse solvers, tiered compression | [crates.io](https://crates.io/crates/wifi-densepose-ruvector) |\n| [Self-Learning WiFi AI (ADR-024)](#self-learning-wifi-ai-adr-024) | Contrastive self-supervised learning, room fingerprinting, anomaly detection, 55 KB model | [ADR-024](docs/adr/ADR-024-contrastive-csi-embedding-model.md) |\n| [Cross-Environment Generalization (ADR-027)](docs/adr/ADR-027-cross-environment-domain-generalization.md) | Domain-adversarial training, geometry-conditioned inference, hardware normalization, zero-shot deployment | [ADR-027](docs/adr/ADR-027-cross-environment-domain-generalization.md) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🖥️ Usage \u0026 Configuration\u003c/strong\u003e — CLI flags, API endpoints, hardware setup\u003c/summary\u003e\n\nThe Rust sensing server is the 
primary interface, offering a comprehensive CLI with flags for data source selection, model loading, training, benchmarking, and RVF export. A REST API (Axum) and WebSocket server provide real-time data access. The Python v1 CLI remains available for legacy workflows.\n\n| Section | Description | Docs |\n|---------|-------------|------|\n| [CLI Usage](#cli-usage) | `--source`, `--train`, `--benchmark`, `--export-rvf`, `--model`, `--progressive` | — |\n| [REST API \u0026 WebSocket](#rest-api--websocket) | 6 REST endpoints (sensing, vitals, BSSID, SONA), WebSocket real-time stream | — |\n| [Hardware Support](#hardware-support-1) | ESP32-S3 ($8), Intel 5300 ($15), Atheros AR9580 ($20), Windows RSSI ($0) | [ADR-012](docs/adr/ADR-012-esp32-csi-sensor-mesh.md) · [ADR-013](docs/adr/ADR-013-feature-level-sensing-commodity-gear.md) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e⚙️ Development \u0026 Testing\u003c/strong\u003e — 542+ tests, CI, deployment\u003c/summary\u003e\n\nThe project maintains 542+ pure-Rust tests across 7 crate suites with zero mocks — every test runs against real algorithm implementations. Hardware-free simulation mode (`--source simulate`) enables full-stack testing without physical devices. 
Docker images are published on Docker Hub for zero-setup deployment.\n\n| Section | Description | Docs |\n|---------|-------------|------|\n| [Testing](#testing) | 7 test suites: sensing-server (229), signal (83), mat (139), wifiscan (91), RVF (16), vitals (18) | — |\n| [Deployment](#deployment) | Docker images (132 MB Rust / 569 MB Python), docker-compose, env vars | — |\n| [Contributing](#contributing) | Fork → branch → test → PR workflow, Rust and Python dev setup | — |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e📊 Performance \u0026 Benchmarks\u003c/strong\u003e — Measured throughput, latency, resource usage\u003c/summary\u003e\n\nAll benchmarks are measured on the Rust sensing server using `cargo bench` and the built-in `--benchmark` CLI flag. The Rust v2 implementation delivers 810x end-to-end speedup over the Python v1 baseline, with motion detection reaching 5,400x improvement. The vital sign detector processes 11,665 frames/second in a single-threaded benchmark.\n\n| Section | Description | Key Metric |\n|---------|-------------|------------|\n| [Performance Metrics](#performance-metrics) | Vital signs, CSI pipeline, motion detection, Docker image, memory | 11,665 fps vitals · 54K fps pipeline |\n| [Rust vs Python](#python-vs-rust) | Side-by-side benchmarks across 5 operations | **810x** full pipeline speedup |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e📄 Meta\u003c/strong\u003e — License, changelog, support\u003c/summary\u003e\n\nWiFi DensePose is MIT-licensed open source, developed by [ruvnet](https://github.com/ruvnet). 
The project has been in active development since March 2025, with 3 major releases delivering the Rust port, SOTA signal processing, disaster response module, and end-to-end training pipeline.\n\n| Section | Description | Link |\n|---------|-------------|------|\n| [Changelog](#changelog) | v3.0.0 (AETHER AI + Docker), v2.0.0 (Rust port + SOTA + WiFi-Mat) | [CHANGELOG.md](CHANGELOG.md) |\n| [License](#license) | MIT License | [LICENSE](LICENSE) |\n| [Support](#support) | Bug reports, feature requests, community discussion | [Issues](https://github.com/ruvnet/RuView/issues) · [Discussions](https://github.com/ruvnet/RuView/discussions) |\n\n\u003c/details\u003e\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🌍 Cross-Environment Generalization (ADR-027 — Project MERIDIAN)\u003c/strong\u003e — Train once, deploy in any room without retraining\u003c/summary\u003e\n\n| What | How it works | Why it matters |\n|------|-------------|----------------|\n| **Gradient Reversal Layer** | An adversarial classifier tries to guess which room the signal came from; the main network is trained to fool it | Forces the model to discard room-specific shortcuts |\n| **Geometry Encoder (FiLM)** | Transmitter/receiver positions are Fourier-encoded and injected as scale+shift conditioning on every layer | The model knows *where* the hardware is, so it doesn't need to memorize layout |\n| **Hardware Normalizer** | Resamples any chipset's CSI to a canonical 56-subcarrier format with standardized amplitude | Intel 5300 and ESP32 data look identical to the model |\n| **Virtual Domain Augmentation** | Generates synthetic environments with random room scale, wall reflections, scatterers, and noise profiles | Training sees 1000s of rooms even with data from just 2-3 |\n| **Rapid Adaptation (TTT)** | Contrastive test-time training with LoRA weight generation from a few unlabeled frames | Zero-shot deployment — the model self-tunes on arrival |\n| **Cross-Domain Evaluator** | 
Leave-one-out evaluation across all training environments with per-environment PCK/OKS metrics | Proves generalization, not just memorization |\n\n**Architecture**\n\n```\nCSI Frame [any chipset]\n    │\n    ▼\nHardwareNormalizer ──→ canonical 56 subcarriers, N(0,1) amplitude\n    │\n    ▼\nCSI Encoder (existing) ──→ latent features\n    │\n    ├──→ Pose Head ──→ 17-joint pose (environment-invariant)\n    │\n    ├──→ Gradient Reversal Layer ──→ Domain Classifier (adversarial)\n    │         λ ramps 0→1 via cosine/exponential schedule\n    │\n    └──→ Geometry Encoder ──→ FiLM conditioning (scale + shift)\n              Fourier positional encoding → DeepSets → per-layer modulation\n```\n\n**Security hardening:**\n- Bounded calibration buffer (max 10,000 frames) prevents memory exhaustion\n- `adapt()` returns `Result\u003c_, AdaptError\u003e` — no panics on bad input\n- Atomic instance counter ensures unique weight initialization across threads\n- Division-by-zero guards on all augmentation parameters\n\nSee [`docs/adr/ADR-027-cross-environment-domain-generalization.md`](docs/adr/ADR-027-cross-environment-domain-generalization.md) for full architectural details.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🔍 Independent Capability Audit (ADR-028)\u003c/strong\u003e — 1,031 tests, SHA-256 proof, self-verifying witness bundle\u003c/summary\u003e\n\nA [3-agent parallel audit](docs/adr/ADR-028-esp32-capability-audit.md) independently verified every claim in this repository — ESP32 hardware, signal processing, neural networks, training pipeline, deployment, and security. 
Results:\n\n```\nRust tests:     1,031 passed, 0 failed\nPython proof:   VERDICT: PASS (SHA-256: 8c0680d7...)\nBundle verify:  7/7 checks PASS\n```\n\n**33-row attestation matrix:** 31 capabilities verified YES, 2 not measured at audit time (benchmark throughput, Kubernetes deploy).\n\n**Verify it yourself** (no hardware needed):\n```bash\n# Run all tests\ncd rust-port/wifi-densepose-rs \u0026\u0026 cargo test --workspace --no-default-features\n\n# Run the deterministic proof\npython v1/data/proof/verify.py\n\n# Generate + verify the witness bundle\nbash scripts/generate-witness-bundle.sh\ncd dist/witness-bundle-ADR028-*/ \u0026\u0026 bash VERIFY.sh\n```\n\n| Document | What it contains |\n|----------|-----------------|\n| [ADR-028](docs/adr/ADR-028-esp32-capability-audit.md) | Full audit: ESP32 specs, signal algorithms, NN architectures, training phases, deployment infra |\n| [Witness Log](docs/WITNESS-LOG-028.md) | 11 reproducible verification steps + 33-row attestation matrix with evidence per row |\n| [`generate-witness-bundle.sh`](scripts/generate-witness-bundle.sh) | Creates self-contained tar.gz with test logs, proof output, firmware hashes, crate versions, VERIFY.sh |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e📡 Multistatic Sensing (ADR-029/030/031 — Project RuvSense + RuView)\u003c/strong\u003e — Multiple ESP32 nodes fuse viewpoints for production-grade pose, tracking, and exotic sensing\u003c/summary\u003e\n\nA single WiFi receiver can track people, but has blind spots — limbs behind the torso are invisible, depth is ambiguous, and two people at similar range create overlapping signals. 
RuvSense solves this by coordinating multiple ESP32 nodes into a **multistatic mesh** where every node acts as both transmitter and receiver, creating N×(N-1) measurement links from N devices.\n\n**What it does in plain terms:**\n- 4 ESP32-S3 nodes ($48 total) provide 12 TX-RX measurement links covering 360 degrees\n- Each node hops across WiFi channels 1/6/11, tripling effective bandwidth from 20→60 MHz\n- Coherence gating rejects noisy frames automatically — no manual tuning, stable for days\n- Two-person tracking at 20 Hz with zero identity swaps over 10 minutes\n- The room itself becomes a persistent model — the system remembers, predicts, and explains\n\n**Three ADRs, one pipeline:**\n\n| ADR | Codename | What it adds |\n|-----|----------|-------------|\n| [ADR-029](docs/adr/ADR-029-ruvsense-multistatic-sensing-mode.md) | **RuvSense** | Channel hopping, TDM protocol, multi-node fusion, coherence gating, 17-keypoint Kalman tracker |\n| [ADR-030](docs/adr/ADR-030-ruvsense-persistent-field-model.md) | **RuvSense Field** | Room electromagnetic eigenstructure (SVD), RF tomography, longitudinal drift detection, intention prediction, gesture recognition, adversarial detection |\n| [ADR-031](docs/adr/ADR-031-ruview-sensing-first-rf-mode.md) | **RuView** | Cross-viewpoint attention with geometric bias, viewpoint diversity optimization, embedding-level fusion |\n\n**Architecture**\n\n```\n4x ESP32-S3 nodes ($48)     TDM: each transmits in turn, all others receive\n        │                    Channel hop: ch1→ch6→ch11 per dwell (50ms)\n        ▼\nPer-Node Signal Processing   Phase sanitize → Hampel → BVP → subcarrier select\n        │                    (ADR-014, unchanged per viewpoint)\n        ▼\nMulti-Band Frame Fusion      3 channels × 56 subcarriers = 168 virtual subcarriers\n        │                    Cross-channel phase alignment via NeumannSolver\n        ▼\nMultistatic Viewpoint Fusion  N nodes → attention-weighted fusion → single embedding\n        │        
            Geometric bias from node placement angles\n        ▼\nCoherence Gate               Accept / PredictOnly / Reject / Recalibrate\n        │                    Prevents model drift, stable for days\n        ▼\nPersistent Field Model       SVD baseline → body = observation - environment\n        │                    RF tomography, drift detection, intention signals\n        ▼\nPose Tracker + DensePose     17-keypoint Kalman, re-ID via AETHER embeddings\n                             Multi-person min-cut separation, zero ID swaps\n```\n\n**Seven Exotic Sensing Tiers (ADR-030)**\n\n| Tier | Capability | What it detects |\n|------|-----------|-----------------|\n| 1 | Field Normal Modes | Room electromagnetic eigenstructure via SVD |\n| 2 | Coarse RF Tomography | 3D occupancy volume from link attenuations |\n| 3 | Intention Lead Signals | Pre-movement prediction 200-500ms before action |\n| 4 | Longitudinal Biomechanics | Personal movement changes over days/weeks |\n| 5 | Cross-Room Continuity | Identity preserved across rooms without cameras |\n| 6 | Invisible Interaction | Multi-user gesture control through walls |\n| 7 | Adversarial Detection | Physically impossible signal identification |\n\n**Acceptance Test**\n\n| Metric | Threshold | What it proves |\n|--------|-----------|---------------|\n| Torso keypoint jitter | \u003c 30mm RMS | Precision sufficient for applications |\n| Identity swaps | 0 over 10 minutes (12,000 frames) | Reliable multi-person tracking |\n| Update rate | 20 Hz (50ms cycle) | Real-time response |\n| Breathing SNR | \u003e 10 dB at 3m | Small-motion sensitivity confirmed |\n\n**New Rust modules (9,000+ lines)**\n\n| Crate | New modules | Purpose |\n|-------|------------|---------|\n| `wifi-densepose-signal` | `ruvsense/` (10 modules) | Multiband fusion, phase alignment, multistatic fusion, coherence, field model, tomography, longitudinal drift, intention detection |\n| `wifi-densepose-ruvector` | `viewpoint/` (5 modules) | 
Cross-viewpoint attention with geometric bias, diversity index, coherence gating, fusion orchestrator |\n| `wifi-densepose-hardware` | `esp32/tdm.rs` | TDM sensing protocol, sync beacons, clock drift compensation |\n\n**Firmware extensions (C, backward-compatible)**\n\n| File | Addition |\n|------|---------|\n| `csi_collector.c` | Channel hop table, timer-driven hop, NDP injection stub |\n| `nvs_config.c` | 5 new NVS keys: hop_count, channel_list, dwell_ms, tdm_slot, tdm_node_count |\n\n**DDD Domain Model** — 6 bounded contexts: Multistatic Sensing, Coherence, Pose Tracking, Field Model, Cross-Room Identity, Adversarial Detection. Full specification: [`docs/ddd/ruvsense-domain-model.md`](docs/ddd/ruvsense-domain-model.md).\n\nSee the ADR documents for full architectural details, GOAP integration plans, and research references.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003e🔮 Signal-Line Protocol (CRV)\u003c/b\u003e\u003c/summary\u003e\n\n### 6-Stage CSI Signal Line\n\nMaps the CRV (Coordinate Remote Viewing) signal-line methodology to WiFi CSI processing via `ruvector-crv`:\n\n| Stage | CRV Name | WiFi CSI Mapping | ruvector Component |\n|-------|----------|-----------------|-------------------|\n| I | Ideograms | Raw CSI gestalt (manmade/natural/movement/energy) | Poincare ball hyperbolic embeddings |\n| II | Sensory | Amplitude textures, phase patterns, frequency colors | Multi-head attention vectors |\n| III | Dimensional | AP mesh spatial topology, node geometry | GNN graph topology |\n| IV | Emotional/AOL | Coherence gating — signal vs noise separation | SNN temporal encoding |\n| V | Interrogation | Cross-stage probing — query pose against CSI history | Differentiable search |\n| VI | 3D Model | Composite person estimation, MinCut partitioning | Graph partitioning |\n\n**Cross-Session Convergence**: When multiple AP clusters observe the same person, CRV convergence analysis finds agreement in their signal embeddings — directly 
mapping to cross-room identity continuity.\n\n```rust\nuse wifi_densepose_ruvector::crv::{WifiCrvConfig, WifiCrvPipeline};\n\nlet mut pipeline = WifiCrvPipeline::new(WifiCrvConfig::default());\npipeline.create_session(\"room-a\", \"person-001\")?;\n\n// Process CSI frames through 6-stage pipeline\nlet result = pipeline.process_csi_frame(\"room-a\", \u0026amplitudes, \u0026phases)?;\n// result.gestalt = Movement, confidence = 0.87\n// result.sensory_embedding = [0.12, -0.34, ...]\n\n// Cross-room identity matching via convergence\nlet convergence = pipeline.find_cross_room_convergence(\"person-001\", 0.75)?;\n```\n\n**Architecture**:\n- `CsiGestaltClassifier` — Maps CSI amplitude/phase patterns to 6 gestalt types\n- `CsiSensoryEncoder` — Extracts texture/color/temperature/luminosity features from subcarriers\n- `MeshTopologyEncoder` — Encodes AP mesh as GNN graph (Stage III)\n- `CoherenceAolDetector` — Maps coherence gate states to AOL noise detection (Stage IV)\n- `WifiCrvPipeline` — Orchestrates all 6 stages into unified sensing session\n\n\u003c/details\u003e\n\n---\n\n## 📡 Signal Processing \u0026 Sensing\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"esp32-s3-hardware-pipeline\"\u003e\u003c/a\u003e\u003cstrong\u003e📡 ESP32-S3 Hardware Pipeline (ADR-018)\u003c/strong\u003e — 28 Hz CSI streaming, flash \u0026 provision\u003c/summary\u003e\n\nA single ESP32-S3 board (~$9) captures WiFi signal data 28 times per second and streams it over UDP. 
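Consuming that stream on the host side amounts to reading datagrams and decoding I/Q pairs into amplitudes. A minimal sketch of a frame parser — the wire layout here (little-endian `u32` sequence number, `u16` subcarrier count, then signed I/Q byte pairs) is an illustrative assumption, not the firmware's actual format:

```rust
// Hypothetical CSI datagram layout for illustration only:
// [seq: u32 LE][n_subcarriers: u16 LE][n pairs of (I: i8, Q: i8)]
struct CsiFrame {
    seq: u32,
    iq: Vec<(i8, i8)>,
}

fn parse_frame(buf: &[u8]) -> Option<CsiFrame> {
    if buf.len() < 6 {
        return None; // too short to hold the header
    }
    let seq = u32::from_le_bytes(buf[0..4].try_into().ok()?);
    let n = u16::from_le_bytes(buf[4..6].try_into().ok()?) as usize;
    if buf.len() < 6 + 2 * n {
        return None; // truncated payload
    }
    let iq = buf[6..6 + 2 * n]
        .chunks_exact(2)
        .map(|c| (c[0] as i8, c[1] as i8))
        .collect();
    Some(CsiFrame { seq, iq })
}

/// Per-subcarrier amplitude from an I/Q pair.
fn amplitude(i: i8, q: i8) -> f32 {
    ((i as f32).powi(2) + (q as f32).powi(2)).sqrt()
}

fn main() {
    // Build a fake 2-subcarrier frame: seq = 7, IQ = (3,4), (0,5)
    let mut buf = 7u32.to_le_bytes().to_vec();
    buf.extend_from_slice(&2u16.to_le_bytes());
    buf.extend_from_slice(&[3, 4, 0, 5]);
    let f = parse_frame(&buf).unwrap();
    assert_eq!(f.seq, 7);
    assert_eq!(amplitude(f.iq[0].0, f.iq[0].1), 5.0);
    println!("parsed {} subcarriers", f.iq.len());
}
```

In a real receiver the buffer would come from a `std::net::UdpSocket` bound to the stream port; the parsing logic is the same.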
A host server can visualize and record the data, but the ESP32 can also run on its own — detecting presence, measuring breathing and heart rate, and alerting on falls without any server at all.\n\n```\nESP32-S3 node                    UDP/5005        Host server (optional)\n┌───────────────────────┐      ──────────\u003e      ┌──────────────────────┐\n│ Captures WiFi signals │      binary frames    │ Parses frames        │\n│ 28 Hz, up to 192 sub- │      or 32-byte       │ Visualizes poses     │\n│ carriers per frame     │      vitals packets   │ Records CSI data     │\n│                        │                       │ REST API + WebSocket │\n│ On-device (optional):  │                       └──────────────────────┘\n│  Presence detection    │\n│  Breathing + heart rate│\n│  Fall detection        │\n│  WASM custom modules   │\n└───────────────────────┘\n```\n\n| Metric | Measured on hardware |\n|--------|----------------------|\n| CSI frame rate | 28.5 Hz (channel 5, BW20) |\n| Subcarriers per frame | 64 / 128 / 192 (depends on WiFi mode) |\n| UDP latency | \u003c 1 ms on local network |\n| Presence detection range | Reliable at 3 m through walls |\n| Binary size | 947 KB (fits in 1 MB flash partition) |\n| Boot to ready | ~3.9 seconds |\n\n### Flash and provision\n\nDownload a pre-built binary — no build toolchain needed:\n\n| Release | What's included | Tag |\n|---------|-----------------|-----|\n| [v0.2.0](https://github.com/ruvnet/RuView/releases/tag/v0.2.0-esp32) | Stable — raw CSI streaming, multi-node TDM, channel hopping | `v0.2.0-esp32` |\n| [v0.3.0-alpha](https://github.com/ruvnet/RuView/releases/tag/v0.3.0-alpha-esp32) | Alpha — adds on-device edge intelligence and WASM modules ([ADR-039](docs/adr/ADR-039-esp32-edge-intelligence.md), [ADR-040](docs/adr/ADR-040-wasm-programmable-sensing.md)) | `v0.3.0-alpha-esp32` |\n\n```bash\n# 1. 
Flash the firmware to your ESP32-S3\npython -m esptool --chip esp32s3 --port COM7 --baud 460800 \\\n  write_flash --flash_mode dio --flash_size 8MB \\\n  0x0 bootloader.bin 0x8000 partition-table.bin 0x10000 esp32-csi-node.bin\n\n# 2. Set WiFi credentials and server address (stored in flash, survives reboots)\npython firmware/esp32-csi-node/provision.py --port COM7 \\\n  --ssid \"YourWiFi\" --password \"secret\" --target-ip 192.168.1.20\n\n# 3. (Optional) Start the host server to visualize data\ncargo run -p wifi-densepose-sensing-server -- --http-port 3000 --source auto\n# Open http://localhost:3000\n```\n\n### Multi-node mesh\n\nFor better accuracy and room coverage, deploy 3-6 nodes with time-division multiplexing (TDM) so they take turns transmitting:\n\n```bash\n# Node 0 of a 3-node mesh\npython firmware/esp32-csi-node/provision.py --port COM7 \\\n  --ssid \"YourWiFi\" --password \"secret\" --target-ip 192.168.1.20 \\\n  --node-id 0 --tdm-slot 0 --tdm-total 3\n\n# Node 1\npython firmware/esp32-csi-node/provision.py --port COM8 \\\n  --ssid \"YourWiFi\" --password \"secret\" --target-ip 192.168.1.20 \\\n  --node-id 1 --tdm-slot 1 --tdm-total 3\n```\n\nNodes can also hop across WiFi channels (1, 6, 11) to increase sensing bandwidth — configured via [ADR-029](docs/adr/ADR-029-ruvsense-multistatic-sensing-mode.md) channel hopping.\n\n### On-device intelligence (v0.3.0-alpha)\n\nThe alpha firmware can analyze signals locally and send compact results instead of raw data. This means the ESP32 works standalone — no server needed for basic sensing. 
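Conceptually, the on-device detectors are small streaming statistics over recent CSI amplitudes. A sketch of variance-based motion/presence detection — the window size and threshold below are made up for illustration, not the firmware's actual tuning:

```rust
// Motion level as amplitude variance over a short sliding window.
// A still room has low variance; a moving body modulates the
// amplitudes and pushes the variance above a threshold.
fn motion_level(window: &[f32]) -> f32 {
    let n = window.len() as f32;
    let mean = window.iter().sum::<f32>() / n;
    window.iter().map(|a| (a - mean).powi(2)).sum::<f32>() / n
}

/// Presence = motion variance above a (hypothetical) threshold.
fn presence(window: &[f32], threshold: f32) -> bool {
    motion_level(window) > threshold
}

fn main() {
    let still: Vec<f32> = (0..32).map(|_| 10.0).collect();
    let moving: Vec<f32> = (0..32)
        .map(|i| 10.0 + if i % 2 == 0 { 2.0 } else { -2.0 })
        .collect();
    assert!(!presence(&still, 1.0));
    assert!(presence(&moving, 1.0)); // alternating ±2 → variance 4.0
    println!(
        "still: {:.2}, moving: {:.2}",
        motion_level(&still),
        motion_level(&moving)
    );
}
```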
Disabled by default for backward compatibility.\n\n| Tier | What it does | RAM used |\n|------|-------------|----------|\n| **0** | Off — streams raw CSI only (same as v0.2.0) | 0 KB |\n| **1** | Cleans up signals, picks the best subcarriers, compresses data (saves 30-50% bandwidth) | ~30 KB |\n| **2** | Everything in Tier 1 + detects presence, measures breathing and heart rate, detects falls | ~33 KB |\n| **3** | Everything in Tier 2 + runs custom WASM modules (gesture recognition, intrusion detection, and [63 more](docs/edge-modules/README.md)) | ~160 KB/module |\n\nEnable without reflashing — just reprovision:\n\n```bash\n# Turn on Tier 2 (vitals) on an already-flashed node\npython firmware/esp32-csi-node/provision.py --port COM7 \\\n  --ssid \"YourWiFi\" --password \"secret\" --target-ip 192.168.1.20 \\\n  --edge-tier 2\n\n# Fine-tune detection thresholds\npython firmware/esp32-csi-node/provision.py --port COM7 \\\n  --edge-tier 2 --vital-int 500 --fall-thresh 5000 --subk-count 16\n```\n\nWhen Tier 2 is active, the node sends a 32-byte vitals packet once per second containing: presence, motion level, breathing BPM, heart rate BPM, confidence scores, fall alert flag, and occupancy count.\n\nSee [firmware/esp32-csi-node/README.md](firmware/esp32-csi-node/README.md), [ADR-039](docs/adr/ADR-039-esp32-edge-intelligence.md), [ADR-044](docs/adr/ADR-044-provisioning-tool-enhancements.md), and [Tutorial #34](https://github.com/ruvnet/RuView/issues/34).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🦀 Rust Implementation (v2)\u003c/strong\u003e — 810x faster, 54K fps pipeline\u003c/summary\u003e\n\n### Performance Benchmarks (Validated)\n\n| Operation | Python (v1) | Rust (v2) | Speedup |\n|-----------|-------------|-----------|---------|\n| CSI Preprocessing (4x64) | ~5ms | **5.19 µs** | ~1000x |\n| Phase Sanitization (4x64) | ~3ms | **3.84 µs** | ~780x |\n| Feature Extraction (4x64) | ~8ms | **9.03 µs** | ~890x |\n| Motion 
Detection | ~1ms | **186 ns** | ~5400x |\n| **Full Pipeline** | ~15ms | **18.47 µs** | ~810x |\n| **Vital Signs** | N/A | **86 µs** | 11,665 fps |\n\n| Resource | Python (v1) | Rust (v2) |\n|----------|-------------|-----------|\n| Memory | ~500 MB | ~100 MB |\n| Docker Image | 569 MB | 132 MB |\n| Tests | 41 | 542+ |\n| WASM Support | No | Yes |\n\n```bash\ncd rust-port/wifi-densepose-rs\ncargo build --release\ncargo test --workspace\ncargo bench --package wifi-densepose-signal\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"vital-sign-detection\"\u003e\u003c/a\u003e\u003cstrong\u003e💓 Vital Sign Detection (ADR-021)\u003c/strong\u003e — Breathing and heartbeat via FFT\u003c/summary\u003e\n\n| Capability | Range | Method |\n|------------|-------|--------|\n| **Breathing Rate** | 6-30 BPM (0.1-0.5 Hz) | Bandpass filter + FFT peak detection |\n| **Heart Rate** | 40-120 BPM (0.8-2.0 Hz) | Bandpass filter + FFT peak detection |\n| **Sampling Rate** | 20 Hz (ESP32 CSI) | Real-time streaming |\n| **Confidence** | 0.0-1.0 per sign | Spectral coherence + signal quality |\n\n```bash\n./target/release/sensing-server --source simulate --http-port 3000 --ws-port 3001 --ui-path ../../ui\ncurl http://localhost:3000/api/v1/vital-signs\n```\n\nSee [ADR-021](docs/adr/ADR-021-vital-sign-detection-rvdna-pipeline.md).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"wifi-scan-domain-layer\"\u003e\u003c/a\u003e\u003cstrong\u003e📡 WiFi Scan Domain Layer (ADR-022/025)\u003c/strong\u003e — 8-stage RSSI pipeline for Windows, macOS, and Linux WiFi\u003c/summary\u003e\n\n| Stage | Purpose |\n|-------|---------|\n| **Predictive Gating** | Pre-filter scan results using temporal prediction |\n| **Attention Weighting** | Weight BSSIDs by signal relevance |\n| **Spatial Correlation** | Cross-AP spatial signal correlation |\n| **Motion Estimation** | Detect movement from RSSI variance |\n| **Breathing Extraction** | Extract 
respiratory rate from sub-Hz oscillations |\n| **Quality Gating** | Reject low-confidence estimates |\n| **Fingerprint Matching** | Location and posture classification via RF fingerprints |\n| **Orchestration** | Fuse all stages into unified sensing output |\n\n```bash\ncargo test -p wifi-densepose-wifiscan\n```\n\nSee [ADR-022](docs/adr/ADR-022-windows-wifi-enhanced-fidelity-ruvector.md) and [Tutorial #36](https://github.com/ruvnet/RuView/issues/36).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"wifi-mat-disaster-response\"\u003e\u003c/a\u003e\u003cstrong\u003e🚨 WiFi-Mat: Disaster Response\u003c/strong\u003e — Search \u0026 rescue, START triage, 3D localization\u003c/summary\u003e\n\nWiFi signals penetrate non-metallic debris (concrete, wood, drywall) where cameras and thermal sensors cannot reach. The WiFi-Mat module (`wifi-densepose-mat`, 139 tests) uses CSI analysis to detect survivors trapped under rubble, classify their condition using the START triage protocol, and estimate their 3D position — giving rescue teams actionable intelligence within seconds of deployment.\n\n| Capability | How It Works | Performance Target |\n|------------|-------------|-------------------|\n| **Breathing Detection** | Bandpass 0.07-1.0 Hz + Fresnel zone modeling detects chest displacement of 5-10mm at 5 GHz | 4-60 BPM, \u003c500ms latency |\n| **Heartbeat Detection** | Micro-Doppler shift extraction from fine-grained CSI phase variation | Via ruvector-temporal-tensor |\n| **3D Localization** | Multi-AP triangulation + CSI fingerprint matching + depth estimation through rubble layers | 3-5m penetration |\n| **START Triage** | Ensemble classifier votes on breathing + movement + vital stability → P1-P4 priority | \u003c1% false negative |\n| **Zone Scanning** | 16+ concurrent scan zones with periodic re-scan and audit logging | Full disaster site |\n\n**Triage classification (START protocol compatible):**\n\n| Status | Color | Detection Criteria | 
Priority |\n|--------|-------|-------------------|----------|\n| Immediate | Red | Breathing detected, no movement | P1 |\n| Delayed | Yellow | Movement + breathing, stable vitals | P2 |\n| Minor | Green | Strong movement, responsive patterns | P3 |\n| Deceased | Black | No vitals for \u003e30 min continuous scan | P4 |\n\n**Deployment modes:** portable (single TX/RX handheld), distributed (multiple APs around collapse site), drone-mounted (UAV scanning), vehicle-mounted (mobile command post).\n\n```rust\nuse wifi_densepose_mat::{DisasterResponse, DisasterConfig, DisasterType, ScanZone, ZoneBounds};\n\nlet config = DisasterConfig::builder()\n    .disaster_type(DisasterType::Earthquake)\n    .sensitivity(0.85)\n    .max_depth(5.0)\n    .build();\n\nlet mut response = DisasterResponse::new(config);\nresponse.initialize_event(location, \"Building collapse\")?;\nresponse.add_zone(ScanZone::new(\"North Wing\", ZoneBounds::rectangle(0.0, 0.0, 30.0, 20.0)))?;\nresponse.start_scanning().await?;\n```\n\n**Safety guarantees:** fail-safe defaults (assume life present on ambiguous signals), redundant multi-algorithm voting, complete audit trail, offline-capable (no network required).\n\n- [WiFi-Mat User Guide](docs/wifi-mat-user-guide.md) | [ADR-001](docs/adr/ADR-001-wifi-mat-disaster-detection.md) | [Domain Model](docs/ddd/wifi-mat-domain-model.md)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"sota-signal-processing\"\u003e\u003c/a\u003e\u003cstrong\u003e🔬 SOTA Signal Processing (ADR-014)\u003c/strong\u003e — 6 research-grade algorithms\u003c/summary\u003e\n\nThe signal processing layer bridges the gap between raw commodity WiFi hardware output and research-grade sensing accuracy. Each algorithm addresses a specific limitation of naive CSI processing — from hardware-induced phase corruption to environment-dependent multipath interference. 
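To give a flavor of these algorithms in code, the simplest of them — the Hampel outlier filter, which replaces any sample farther than a few scaled MADs from the running median — can be sketched in a few lines. The window size and threshold below are illustrative defaults, not the crate's actual parameters:

```rust
/// Median of a small scratch vector (sorted in place).
fn median(v: &mut Vec<f32>) -> f32 {
    v.sort_by(|a, b| a.partial_cmp(b).unwrap());
    v[v.len() / 2]
}

/// Hampel filter: replace samples more than `n_sigma` robust standard
/// deviations (1.4826 × MAD) from the local window median.
fn hampel(x: &[f32], half_window: usize, n_sigma: f32) -> Vec<f32> {
    let mut out = x.to_vec();
    for i in 0..x.len() {
        let lo = i.saturating_sub(half_window);
        let hi = (i + half_window + 1).min(x.len());
        let med = median(&mut x[lo..hi].to_vec());
        let mad = median(&mut x[lo..hi].iter().map(|v| (v - med).abs()).collect());
        let sigma = 1.4826 * mad;
        if sigma > 0.0 && (x[i] - med).abs() > n_sigma * sigma {
            out[i] = med; // outlier → replace with local median
        }
    }
    out
}

fn main() {
    let noisy = [1.0, 1.1, 0.9, 50.0, 1.0, 1.05, 0.95];
    let clean = hampel(&noisy, 3, 3.0);
    assert!(clean[3] < 2.0); // the 50.0 spike collapses to the local median
    assert_eq!(clean[0], noisy[0]); // inliers pass through untouched
    println!("{:?}", clean);
}
```

Because it uses median/MAD rather than mean/std, a single large spike cannot inflate its own rejection threshold — the masking effect the table entry describes.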
All six are implemented in `wifi-densepose-signal/src/` with deterministic tests and no mock data.\n\n| Algorithm | What It Does | Why It Matters | Math | Source |\n|-----------|-------------|----------------|------|--------|\n| **Conjugate Multiplication** | Multiplies CSI antenna pairs: `H₁[k] × conj(H₂[k])` | Cancels CFO, SFO, and packet detection delay that corrupt raw phase — preserves only environment-caused phase differences | `CSI_ratio[k] = H₁[k] * conj(H₂[k])` | [SpotFi](https://dl.acm.org/doi/10.1145/2789168.2790124) (SIGCOMM 2015) |\n| **Hampel Filter** | Replaces outliers using running median ± scaled MAD | Z-score uses mean/std which are corrupted by the very outliers it detects (masking effect). Hampel uses median/MAD, resisting up to 50% contamination | `σ̂ = 1.4826 × MAD` | Standard DSP; WiGest (2015) |\n| **Fresnel Zone Model** | Models signal variation from chest displacement crossing Fresnel zone boundaries | Zero-crossing counting fails in multipath-rich environments. Fresnel predicts *where* breathing should appear based on TX-RX-body geometry | `ΔΦ = 2π × 2Δd / λ`, `A = \\|sin(ΔΦ/2)\\|` | [FarSense](https://dl.acm.org/doi/10.1145/3300061.3345431) (MobiCom 2019) |\n| **CSI Spectrogram** | Sliding-window FFT (STFT) per subcarrier → 2D time-frequency matrix | Breathing = 0.2-0.4 Hz band, walking = 1-2 Hz, static = noise. 2D structure enables CNN spatial pattern recognition that 1D features miss | `S[t,f] = \\|Σₙ x[n] w[n-t] e^{-j2πfn}\\|²` | Standard since 2018 |\n| **Subcarrier Selection** | Ranks subcarriers by motion sensitivity (variance ratio) and selects top-K | Not all subcarriers respond to motion — some sit in multipath nulls. 
Selecting the 10-20 most sensitive improves SNR by 6-10 dB | `sensitivity[k] = var_motion / var_static` | [WiDance](https://dl.acm.org/doi/10.1145/3117811.3117826) (MobiCom 2017) |\n| **Body Velocity Profile** | Extracts velocity distribution from Doppler shifts across subcarriers | BVP is domain-independent — same velocity profile regardless of room layout, furniture, or AP placement. Basis for cross-environment recognition | `BVP[v,t] = Σₖ \\|STFTₖ[v,t]\\|` | [Widar 3.0](https://dl.acm.org/doi/10.1145/3328916) (MobiSys 2019) |\n\n**Processing pipeline order:** Raw CSI → Conjugate multiplication (phase cleaning) → Hampel filter (outlier removal) → Subcarrier selection (top-K) → CSI spectrogram (time-frequency) → Fresnel model (breathing) + BVP (activity)\n\nSee [ADR-014](docs/adr/ADR-014-sota-signal-processing.md) for full mathematical derivations.\n\n\u003c/details\u003e\n\n---\n\n## 🧠 Models \u0026 Training\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"ai-backbone-ruvector\"\u003e\u003c/a\u003e\u003cstrong\u003e🤖 AI Backbone: RuVector\u003c/strong\u003e — Attention, graph algorithms, and edge-AI compression powering the sensing pipeline\u003c/summary\u003e\n\nRaw WiFi signals are noisy, redundant, and environment-dependent. [RuVector](https://github.com/ruvnet/ruvector) is the AI intelligence layer that transforms them into clean, structured input for the DensePose neural network. It uses **attention mechanisms** to learn which signals to trust, **graph algorithms** that automatically discover which WiFi channels are sensitive to body motion, and **compressed representations** that make edge inference possible on an $8 microcontroller.\n\nWithout RuVector, WiFi DensePose would need hand-tuned thresholds, brute-force matrix math, and 4x more memory — making real-time edge inference impossible.\n\n```\nRaw WiFi CSI (56 subcarriers, noisy)\n    |\n    +-- ruvector-mincut ---------- Which channels carry body-motion signal? 
(learned graph partitioning)\n    +-- ruvector-attn-mincut ----- Which time frames are signal vs noise? (attention-gated filtering)\n    +-- ruvector-attention ------- How to fuse multi-antenna data? (learned weighted aggregation)\n    |\n    v\nClean, structured signal --\u003e DensePose Neural Network --\u003e 17-keypoint body pose\n                         --\u003e FFT Vital Signs -----------\u003e breathing rate, heart rate\n                         --\u003e ruvector-solver ------------\u003e physics-based localization\n```\n\nThe [`wifi-densepose-ruvector`](https://crates.io/crates/wifi-densepose-ruvector) crate ([ADR-017](docs/adr/ADR-017-ruvector-signal-mat-integration.md)) connects all 7 integration points:\n\n| AI Capability | What It Replaces | RuVector Crate | Result |\n|--------------|-----------------|----------------|--------|\n| **Self-optimizing channel selection** | Hand-tuned thresholds that break when rooms change | `ruvector-mincut` | Graph min-cut adapts to any environment automatically |\n| **Attention-based signal cleaning** | Fixed energy cutoffs that miss subtle breathing | `ruvector-attn-mincut` | Learned gating amplifies body signals, suppresses noise |\n| **Learned signal fusion** | Simple averaging where one bad channel corrupts all | `ruvector-attention` | Transformer-style attention downweights corrupted channels |\n| **Physics-informed localization** | Expensive nonlinear solvers | `ruvector-solver` | Sparse least-squares Fresnel geometry in real-time |\n| **O(1) survivor triangulation** | O(N^3) matrix inversion | `ruvector-solver` | Neumann series linearization for instant position updates |\n| **75% memory compression** | 13.4 MB breathing buffers that overflow edge devices | `ruvector-temporal-tensor` | Tiered 3-8 bit quantization fits 60s of vitals in 3.4 MB |\n\nSee [issue #67](https://github.com/ruvnet/RuView/issues/67) for a deep dive with code examples, or [`cargo add 
wifi-densepose-ruvector`](https://crates.io/crates/wifi-densepose-ruvector) to use it directly.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"rvf-model-container\"\u003e\u003c/a\u003e\u003cstrong\u003e📦 RVF Model Container\u003c/strong\u003e — Single-file deployment with progressive loading\u003c/summary\u003e\n\nThe [RuVector Format (RVF)](https://github.com/ruvnet/ruvector/tree/main/crates/rvf) packages an entire trained model — weights, HNSW indexes, quantization codebooks, SONA adaptation deltas, and WASM inference runtime — into a single self-contained binary file. No external dependencies are needed at deployment time.\n\n**Container structure:**\n\n```\n┌──────────────────────────────────────────────────────┐\n│ RVF Container (.rvf)                                  │\n│                                                       │\n│  ┌─────────────┐  64-byte header per segment          │\n│  │ Manifest     │  Magic: 0x52564653 (\"RVFS\")         │\n│  ├─────────────┤  Type + content hash + compression   │\n│  │ Weights      │  Model parameters (f32/f16/u8)      │\n│  ├─────────────┤                                      │\n│  │ HNSW Index   │  Vector search index                │\n│  ├─────────────┤                                      │\n│  │ Quant        │  Quantization codebooks              │\n│  ├─────────────┤                                      │\n│  │ SONA Profile │  LoRA deltas + EWC++ Fisher matrix  │\n│  ├─────────────┤                                      │\n│  │ Witness      │  Ed25519 training proof              │\n│  ├─────────────┤                                      │\n│  │ Vitals Config│  Breathing/HR filter parameters     │\n│  └─────────────┘                                      │\n└──────────────────────────────────────────────────────┘\n```\n\n**Deployment targets:**\n\n| Target | Quantization | Size | Load Time | Use Case |\n|--------|-------------|------|-----------|----------|\n| **ESP32 / IoT** | int4 | 
~0.7 MB | \u003c5ms (Layer A) | Presence + breathing only |\n| **Mobile / WebView** | int8 | ~6 MB | ~200ms (Layer B) | Pose estimation on phone |\n| **Browser (WASM)** | int8 | ~10 MB | ~500ms (Layer B) | In-browser demo |\n| **Field (WiFi-Mat)** | fp16 | ~62 MB | ~2s (Layer C) | Full DensePose + disaster triage |\n| **Server / Cloud** | f32 | ~50+ MB | ~3s (Layer C) | Training + full inference |\n\n| Property | Detail |\n|----------|--------|\n| **Format** | Segment-based binary, 20+ segment types, CRC32 integrity per segment |\n| **Progressive Loading** | **Layer A** (\u003c5ms): manifest + entry points → **Layer B** (100ms-1s): hot weights + adjacency → **Layer C** (seconds): full graph |\n| **Signing** | Ed25519 training proofs for verifiable provenance — chain of custody from training data to deployed model |\n| **Quantization** | Per-segment temperature-tiered: f32 (full), f16 (half), u8 (int8), int4 — with SIMD-accelerated distance computation |\n| **CLI** | `--export-rvf` (generate), `--load-rvf` (config), `--save-rvf` (persist), `--model` (inference), `--progressive` (3-layer load) |\n\n```bash\n# Export model package\n./target/release/sensing-server --export-rvf wifi-densepose-v1.rvf\n\n# Load and run with progressive loading\n./target/release/sensing-server --model wifi-densepose-v1.rvf --progressive\n\n# Export via Docker\ndocker run --rm -v $(pwd):/out ruvnet/wifi-densepose:latest --export-rvf /out/model.rvf\n```\n\nBuilt on the [rvf](https://github.com/ruvnet/ruvector/tree/main/crates/rvf) crate family (rvf-types, rvf-wire, rvf-manifest, rvf-index, rvf-quant, rvf-crypto, rvf-runtime). 
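\n\nThe segment framing can be sketched in a few lines of Rust. This is a minimal illustration, not the `rvf-wire` layout: only the 64-byte header size and the 0x52564653 (\"RVFS\") magic come from the container description above, while the field offsets, little-endian byte order, and `SegmentHeader` shape are assumptions.\n\n```rust\n// Hypothetical sketch of RVF segment framing. Only the 64-byte header and\n// the 0x52564653 (\"RVFS\") magic come from the spec above; field offsets\n// and the little-endian layout are illustrative assumptions.\nconst RVF_MAGIC: u32 = 0x5256_4653; // \"RVFS\"\nconst HEADER_LEN: usize = 64;\n\n#[derive(Debug, PartialEq)]\nstruct SegmentHeader {\n    seg_type: u8,     // weights, index, quant, sona, witness, ...\n    compression: u8,  // assumed codec id (0 = none)\n    payload_len: u64, // bytes of payload following the header\n    crc32: u32,       // per-segment integrity check\n}\n\nfn parse_header(buf: \u0026[u8]) -\u003e Option\u003cSegmentHeader\u003e {\n    if buf.len() \u003c HEADER_LEN {\n        return None; // truncated header\n    }\n    let magic = u32::from_le_bytes(buf[0..4].try_into().ok()?);\n    if magic != RVF_MAGIC {\n        return None; // not an RVF segment\n    }\n    Some(SegmentHeader {\n        seg_type: buf[4],\n        compression: buf[5],\n        payload_len: u64::from_le_bytes(buf[8..16].try_into().ok()?),\n        crc32: u32::from_le_bytes(buf[16..20].try_into().ok()?),\n    })\n}\n\nfn main() {\n    let mut buf = [0u8; HEADER_LEN];\n    buf[0..4].copy_from_slice(\u0026RVF_MAGIC.to_le_bytes());\n    buf[4] = 1; // e.g. a weights segment\n    buf[8..16].copy_from_slice(\u00261024u64.to_le_bytes());\n    let h = parse_header(\u0026buf).expect(\"valid header\");\n    assert_eq!((h.seg_type, h.payload_len), (1, 1024));\n}\n```\n\nA real reader would additionally verify the payload against the stored CRC32 before trusting the segment.\n\n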
See [ADR-023](docs/adr/ADR-023-trained-densepose-model-ruvector-pipeline.md).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"training--fine-tuning\"\u003e\u003c/a\u003e\u003cstrong\u003e🧬 Training \u0026 Fine-Tuning\u003c/strong\u003e — MM-Fi/Wi-Pose pre-training, SONA adaptation\u003c/summary\u003e\n\nThe training pipeline implements 8 phases in pure Rust (7,832 lines, zero external ML dependencies). It trains a graph transformer with cross-attention to map CSI feature matrices to 17 COCO body keypoints and DensePose UV coordinates — following the approach from the CMU \"DensePose From WiFi\" paper ([arXiv:2301.00250](https://arxiv.org/abs/2301.00250)). RuVector crates provide the core building blocks: [ruvector-attention](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-attention) for cross-attention layers, [ruvector-mincut](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-mincut) for multi-person matching, and [ruvector-temporal-tensor](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-temporal-tensor) for CSI buffer compression.\n\n**Three-tier data strategy:**\n\n| Tier | Method | Purpose | RuVector Integration |\n|------|--------|---------|---------------------|\n| **1. Pre-train** | MM-Fi + Wi-Pose public datasets | Cross-environment generalization (multi-subject, multi-room) | `ruvector-temporal-tensor` compresses CSI windows (114→56 subcarrier resampling) |\n| **2. Fine-tune** | ESP32 CSI + camera pseudo-labels | Environment-specific multipath adaptation | `ruvector-solver` for Fresnel geometry, `ruvector-attn-mincut` for subcarrier gating |\n| **3. 
SONA adapt** | Micro-LoRA (rank-4) + EWC++ | Continuous on-device learning without catastrophic forgetting | [SONA](https://github.com/ruvnet/ruvector/tree/main/crates/sona) architecture (Self-Optimizing Neural Architecture) |\n\n**Training pipeline components:**\n\n| Phase | Module | What It Does | RuVector Crate |\n|-------|--------|-------------|----------------|\n| 1 | `dataset.rs` (850 lines) | MM-Fi `.npy` + Wi-Pose `.mat` loaders, subcarrier resampling (114→56, 30→56), windowing | [ruvector-temporal-tensor](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-temporal-tensor) |\n| 2 | `graph_transformer.rs` (855 lines) | COCO BodyGraph (17 kp, 16 edges), AntennaGraph, multi-head CrossAttention, GCN message passing | [ruvector-attention](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-attention) |\n| 3 | `trainer.rs` (881 lines) | 6-term composite loss (MSE, CE, UV, temporal, bone, symmetry), SGD+momentum, cosine+warmup, PCK/OKS | [ruvector-mincut](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-mincut) (person matching) |\n| 4 | `sona.rs` (639 lines) | LoRA adapters (A×B delta), EWC++ Fisher regularization, EnvironmentDetector (3-sigma drift) | [sona](https://github.com/ruvnet/ruvector/tree/main/crates/sona) |\n| 5 | `sparse_inference.rs` (753 lines) | NeuronProfiler hot/cold partitioning, SparseLinear (skip cold rows), INT8/FP16 quantization | [ruvector-sparse-inference](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-sparse-inference) |\n| 6 | `rvf_pipeline.rs` (1,027 lines) | Progressive 3-layer loader, HNSW index, OverlayGraph, `RvfModelBuilder` | [ruvector-core](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-core) (HNSW) |\n| 7 | `rvf_container.rs` (914 lines) | Binary container format, 6+ segment types, CRC32 integrity | [rvf](https://github.com/ruvnet/ruvector/tree/main/crates/rvf) |\n| 8 | `main.rs` integration | `--train`, `--model`, `--progressive` CLI flags, REST endpoints | — 
|\n\n**SONA (Self-Optimizing Neural Architecture)** — the continuous adaptation system:\n\n| Component | What It Does | Why It Matters |\n|-----------|-------------|----------------|\n| **Micro-LoRA (rank-4)** | Trains small A×B weight deltas instead of full weights | 100x fewer parameters to update → runs on ESP32 |\n| **EWC++ (Fisher matrix)** | Penalizes changes to important weights from previous environments | Prevents catastrophic forgetting when moving between rooms |\n| **EnvironmentDetector** | Monitors CSI feature drift with 3-sigma threshold | Auto-triggers adaptation when the model is moved to a new space |\n| **Best-epoch snapshot** | Saves best validation loss weights, restores before export | Prevents shipping overfit final-epoch parameters |\n\n```bash\n# Pre-train on MM-Fi dataset\n./target/release/sensing-server --train --dataset data/ --dataset-type mmfi --epochs 100\n\n# Train and export to RVF in one step\n./target/release/sensing-server --train --dataset data/ --epochs 100 --save-rvf model.rvf\n\n# Via Docker (no toolchain needed)\ndocker run --rm -v $(pwd)/data:/data ruvnet/wifi-densepose:latest \\\n  --train --dataset /data --epochs 100 --export-rvf /data/model.rvf\n```\n\nSee [ADR-023](docs/adr/ADR-023-trained-densepose-model-ruvector-pipeline.md) · [SONA crate](https://github.com/ruvnet/ruvector/tree/main/crates/sona) · [arXiv:2301.00250](https://arxiv.org/abs/2301.00250)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"ruvector-crates\"\u003e\u003c/a\u003e\u003cstrong\u003e🔩 RuVector Crates\u003c/strong\u003e — 11 vendored signal intelligence crates from \u003ca href=\"https://github.com/ruvnet/ruvector\"\u003egithub.com/ruvnet/ruvector\u003c/a\u003e\u003c/summary\u003e\n\n**5 directly-used crates** (v2.0.4, declared in `Cargo.toml`, 7 integration points):\n\n| Crate | What It Does | Where It's Used in WiFi-DensePose | Source |\n|-------|-------------|-----------------------------------|--------|\n| 
[`ruvector-attention`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-attention) | Scaled dot-product attention, MoE routing, sparse attention | `model.rs` (spatial attention), `bvp.rs` (sensitivity-weighted velocity profiles) | [crate](https://crates.io/crates/ruvector-attention) |\n| [`ruvector-mincut`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-mincut) | Subpolynomial dynamic min-cut O(n^1.5 log n) | `metrics.rs` (DynamicPersonMatcher — multi-person assignment), `subcarrier_selection.rs` (sensitive/insensitive split) | [crate](https://crates.io/crates/ruvector-mincut) |\n| [`ruvector-attn-mincut`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-attn-mincut) | Attention-gated spectrogram noise suppression | `model.rs` (antenna attention gating), `spectrogram.rs` (gate noisy time-frequency bins) | [crate](https://crates.io/crates/ruvector-attn-mincut) |\n| [`ruvector-solver`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-solver) | Sparse Neumann series solver O(sqrt(n)) | `fresnel.rs` (TX-body-RX geometry), `triangulation.rs` (3D localization), `subcarrier.rs` (sparse interpolation 114→56) | [crate](https://crates.io/crates/ruvector-solver) |\n| [`ruvector-temporal-tensor`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-temporal-tensor) | Tiered temporal compression (8/7/5/3-bit) | `dataset.rs` (CSI buffer compression), `breathing.rs` + `heartbeat.rs` (compressed vital sign spectrograms) | [crate](https://crates.io/crates/ruvector-temporal-tensor) |\n\n**6 additional vendored crates** (used by training pipeline and inference):\n\n| Crate | What It Does | Source |\n|-------|-------------|--------|\n| [`ruvector-core`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-core) | VectorDB engine, HNSW index, SIMD distance functions, quantization codebooks | [crate](https://crates.io/crates/ruvector-core) |\n| 
[`ruvector-gnn`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-gnn) | Graph neural network layers, graph attention, EWC-regularized training | [crate](https://crates.io/crates/ruvector-gnn) |\n| [`ruvector-graph-transformer`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-graph-transformer) | Proof-gated graph transformer with cross-attention | [crate](https://crates.io/crates/ruvector-graph-transformer) |\n| [`ruvector-sparse-inference`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-sparse-inference) | PowerInfer-style hot/cold neuron partitioning, skip cold rows at runtime | [crate](https://crates.io/crates/ruvector-sparse-inference) |\n| [`ruvector-nervous-system`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-nervous-system) | PredictiveLayer, OscillatoryRouter, Hopfield associative memory | [crate](https://crates.io/crates/ruvector-nervous-system) |\n| [`ruvector-coherence`](https://github.com/ruvnet/ruvector/tree/main/crates/ruvector-coherence) | Spectral coherence monitoring, HNSW graph health, Fiedler connectivity | [crate](https://crates.io/crates/ruvector-coherence) |\n\nThe full RuVector ecosystem includes 90+ crates. 
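\n\nTo make the \"SIMD distance functions\" row above concrete, this is the scalar cosine distance that such kernels vectorize. It is generic math shown for illustration only, not the `ruvector-core` API.\n\n```rust\n// Scalar cosine distance: 0 for parallel vectors, 1 for orthogonal,\n// 2 for opposite. SIMD implementations accelerate exactly this inner\n// loop; this standalone version is only illustrative.\nfn cosine_distance(a: \u0026[f32], b: \u0026[f32]) -\u003e f32 {\n    assert_eq!(a.len(), b.len());\n    let (mut dot, mut na, mut nb) = (0.0f32, 0.0f32, 0.0f32);\n    for (x, y) in a.iter().zip(b) {\n        dot += x * y;\n        na += x * x;\n        nb += y * y;\n    }\n    1.0 - dot / (na.sqrt() * nb.sqrt())\n}\n\nfn main() {\n    // Orthogonal feature vectors are maximally dissimilar.\n    let d = cosine_distance(\u0026[1.0, 0.0], \u0026[0.0, 1.0]);\n    assert!((d - 1.0).abs() \u003c 1e-6);\n}\n```\n\n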
See [github.com/ruvnet/ruvector](https://github.com/ruvnet/ruvector) for the complete library, and [`vendor/ruvector/`](vendor/ruvector/) for the vendored source in this project.\n\n\u003c/details\u003e\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e🏗️ System Architecture\u003c/strong\u003e — End-to-end data flow from CSI capture to REST/WebSocket API\u003c/summary\u003e\n\n### End-to-End Pipeline\n\n```mermaid\ngraph TB\n    subgraph HW [\"📡 Hardware Layer\"]\n        direction LR\n        R1[\"WiFi Router 1\u003cbr/\u003e\u003csmall\u003eCSI Source\u003c/small\u003e\"]\n        R2[\"WiFi Router 2\u003cbr/\u003e\u003csmall\u003eCSI Source\u003c/small\u003e\"]\n        R3[\"WiFi Router 3\u003cbr/\u003e\u003csmall\u003eCSI Source\u003c/small\u003e\"]\n        ESP[\"ESP32-S3 Mesh\u003cbr/\u003e\u003csmall\u003e20 Hz · 56 subcarriers\u003c/small\u003e\"]\n        WIN[\"Windows WiFi\u003cbr/\u003e\u003csmall\u003eRSSI scanning\u003c/small\u003e\"]\n    end\n\n    subgraph INGEST [\"⚡ Ingestion\"]\n        AGG[\"Aggregator\u003cbr/\u003e\u003csmall\u003eUDP :5005 · ADR-018 frames\u003c/small\u003e\"]\n        BRIDGE[\"Bridge\u003cbr/\u003e\u003csmall\u003eI/Q → amplitude + phase\u003c/small\u003e\"]\n    end\n\n    subgraph SIGNAL [\"🔬 Signal Processing — RuVector v2.0.4\"]\n        direction TB\n        PHASE[\"Phase Sanitization\u003cbr/\u003e\u003csmall\u003eSpotFi conjugate multiply\u003c/small\u003e\"]\n        HAMPEL[\"Hampel Filter\u003cbr/\u003e\u003csmall\u003eOutlier rejection · σ=3\u003c/small\u003e\"]\n        SUBSEL[\"Subcarrier Selection\u003cbr/\u003e\u003csmall\u003eruvector-mincut · sensitive/insensitive split\u003c/small\u003e\"]\n        SPEC[\"Spectrogram\u003cbr/\u003e\u003csmall\u003eruvector-attn-mincut · gated STFT\u003c/small\u003e\"]\n        FRESNEL[\"Fresnel Geometry\u003cbr/\u003e\u003csmall\u003eruvector-solver · TX-body-RX distance\u003c/small\u003e\"]\n        BVP[\"Body Velocity 
Profile\u003cbr/\u003e\u003csmall\u003eruvector-attention · weighted BVP\u003c/small\u003e\"]\n    end\n\n    subgraph ML [\"🧠 Neural Pipeline\"]\n        direction TB\n        GRAPH[\"Graph Transformer\u003cbr/\u003e\u003csmall\u003e17 COCO keypoints · 16 edges\u003c/small\u003e\"]\n        CROSS[\"Cross-Attention\u003cbr/\u003e\u003csmall\u003eCSI features → body pose\u003c/small\u003e\"]\n        SONA[\"SONA Adapter\u003cbr/\u003e\u003csmall\u003eLoRA rank-4 · EWC++\u003c/small\u003e\"]\n    end\n\n    subgraph VITAL [\"💓 Vital Signs\"]\n        direction LR\n        BREATH[\"Breathing\u003cbr/\u003e\u003csmall\u003e0.1–0.5 Hz · FFT peak\u003c/small\u003e\"]\n        HEART[\"Heart Rate\u003cbr/\u003e\u003csmall\u003e0.8–2.0 Hz · FFT peak\u003c/small\u003e\"]\n        MOTION[\"Motion Level\u003cbr/\u003e\u003csmall\u003eVariance + band power\u003c/small\u003e\"]\n    end\n\n    subgraph API [\"🌐 Output Layer\"]\n        direction LR\n        REST[\"REST API\u003cbr/\u003e\u003csmall\u003eAxum :3000 · 6 endpoints\u003c/small\u003e\"]\n        WS[\"WebSocket\u003cbr/\u003e\u003csmall\u003e:3001 · real-time stream\u003c/small\u003e\"]\n        ANALYTICS[\"Analytics\u003cbr/\u003e\u003csmall\u003eFall · Activity · START triage\u003c/small\u003e\"]\n        UI[\"Web UI\u003cbr/\u003e\u003csmall\u003eThree.js · Gaussian splats\u003c/small\u003e\"]\n    end\n\n    R1 \u0026 R2 \u0026 R3 --\u003e AGG\n    ESP --\u003e AGG\n    WIN --\u003e BRIDGE\n    AGG --\u003e BRIDGE\n    BRIDGE --\u003e PHASE\n    PHASE --\u003e HAMPEL\n    HAMPEL --\u003e SUBSEL\n    SUBSEL --\u003e SPEC\n    SPEC --\u003e FRESNEL\n    FRESNEL --\u003e BVP\n    BVP --\u003e GRAPH\n    GRAPH --\u003e CROSS\n    CROSS --\u003e SONA\n    SONA --\u003e BREATH \u0026 HEART \u0026 MOTION\n    BREATH \u0026 HEART \u0026 MOTION --\u003e REST \u0026 WS \u0026 ANALYTICS\n    WS --\u003e UI\n\n    style HW fill:#1a1a2e,stroke:#e94560,color:#eee\n    style INGEST fill:#16213e,stroke:#0f3460,color:#eee\n    
style SIGNAL fill:#0f3460,stroke:#533483,color:#eee\n    style ML fill:#533483,stroke:#e94560,color:#eee\n    style VITAL fill:#2d132c,stroke:#e94560,color:#eee\n    style API fill:#1a1a2e,stroke:#0f3460,color:#eee\n```\n\n### Signal Processing Detail\n\n```mermaid\ngraph LR\n    subgraph RAW [\"Raw CSI Frame\"]\n        IQ[\"I/Q Samples\u003cbr/\u003e\u003csmall\u003e56–192 subcarriers × N antennas\u003c/small\u003e\"]\n    end\n\n    subgraph CLEAN [\"Phase Cleanup\"]\n        CONJ[\"Conjugate Multiply\u003cbr/\u003e\u003csmall\u003eRemove carrier freq offset\u003c/small\u003e\"]\n        UNWRAP[\"Phase Unwrap\u003cbr/\u003e\u003csmall\u003eRemove 2π discontinuities\u003c/small\u003e\"]\n        HAMPEL2[\"Hampel Filter\u003cbr/\u003e\u003csmall\u003eRemove impulse noise\u003c/small\u003e\"]\n    end\n\n    subgraph SELECT [\"Subcarrier Intelligence\"]\n        MINCUT[\"Min-Cut Partition\u003cbr/\u003e\u003csmall\u003eruvector-mincut\u003c/small\u003e\"]\n        GATE[\"Attention Gate\u003cbr/\u003e\u003csmall\u003eruvector-attn-mincut\u003c/small\u003e\"]\n    end\n\n    subgraph EXTRACT [\"Feature Extraction\"]\n        STFT[\"STFT Spectrogram\u003cbr/\u003e\u003csmall\u003eTime-frequency decomposition\u003c/small\u003e\"]\n        FRESNELZ[\"Fresnel Zones\u003cbr/\u003e\u003csmall\u003eruvector-solver\u003c/small\u003e\"]\n        BVPE[\"BVP Estimation\u003cbr/\u003e\u003csmall\u003eruvector-attention\u003c/small\u003e\"]\n    end\n\n    subgraph OUT [\"Output Features\"]\n        AMP[\"Amplitude Matrix\"]\n        PHASE2[\"Phase Matrix\"]\n        DOPPLER[\"Doppler Shifts\"]\n        VITALS[\"Vital Band Power\"]\n    end\n\n    IQ --\u003e CONJ --\u003e UNWRAP --\u003e HAMPEL2\n    HAMPEL2 --\u003e MINCUT --\u003e GATE\n    GATE --\u003e STFT --\u003e FRESNELZ --\u003e BVPE\n    BVPE --\u003e AMP \u0026 PHASE2 \u0026 DOPPLER \u0026 VITALS\n\n    style RAW fill:#0d1117,stroke:#58a6ff,color:#c9d1d9\n    style CLEAN fill:#161b22,stroke:#58a6ff,color:#c9d1d9\n    
style SELECT fill:#161b22,stroke:#d29922,color:#c9d1d9\n    style EXTRACT fill:#161b22,stroke:#3fb950,color:#c9d1d9\n    style OUT fill:#0d1117,stroke:#8b949e,color:#c9d1d9\n```\n\n### Deployment Topology\n\n```mermaid\ngraph TB\n    subgraph EDGE [\"Edge (ESP32-S3 Mesh)\"]\n        E1[\"Node 1\u003cbr/\u003e\u003csmall\u003eKitchen\u003c/small\u003e\"]\n        E2[\"Node 2\u003cbr/\u003e\u003csmall\u003eLiving room\u003c/small\u003e\"]\n        E3[\"Node 3\u003cbr/\u003e\u003csmall\u003eBedroom\u003c/small\u003e\"]\n    end\n\n    subgraph SERVER [\"Server (Rust · 132 MB Docker)\"]\n        SENSE[\"Sensing Server\u003cbr/\u003e\u003csmall\u003e:3000 REST · :3001 WS · :5005 UDP\u003c/small\u003e\"]\n        RVF[\"RVF Model\u003cbr/\u003e\u003csmall\u003eProgressive 3-layer load\u003c/small\u003e\"]\n        STORE[\"Time-Series Store\u003cbr/\u003e\u003csmall\u003eIn-memory ring buffer\u003c/small\u003e\"]\n    end\n\n    subgraph CLIENT [\"Clients\"]\n        BROWSER[\"Browser\u003cbr/\u003e\u003csmall\u003eThree.js UI · Gaussian splats\u003c/small\u003e\"]\n        MOBILE[\"Mobile App\u003cbr/\u003e\u003csmall\u003eWebSocket stream\u003c/small\u003e\"]\n        DASH[\"Dashboard\u003cbr/\u003e\u003csmall\u003eREST polling\u003c/small\u003e\"]\n        IOT[\"Home Automation\u003cbr/\u003e\u003csmall\u003eMQTT bridge\u003c/small\u003e\"]\n    end\n\n    E1 --\u003e|\"UDP :5005\u003cbr/\u003eADR-018 frames\"| SENSE\n    E2 --\u003e|\"UDP :5005\"| SENSE\n    E3 --\u003e|\"UDP :5005\"| SENSE\n    SENSE \u003c--\u003e RVF\n    SENSE \u003c--\u003e STORE\n    SENSE --\u003e|\"WS :3001\u003cbr/\u003ereal-time JSON\"| BROWSER \u0026 MOBILE\n    SENSE --\u003e|\"REST :3000\u003cbr/\u003eon-demand\"| DASH \u0026 IOT\n\n    style EDGE fill:#1a1a2e,stroke:#e94560,color:#eee\n    style SERVER fill:#16213e,stroke:#533483,color:#eee\n    style CLIENT fill:#0f3460,stroke:#0f3460,color:#eee\n```\n\n| Component | Crate / Module | Description 
|\n|-----------|---------------|-------------|\n| **Aggregator** | `wifi-densepose-hardware` | ESP32 UDP listener, ADR-018 frame parser, I/Q → amplitude/phase bridge |\n| **Signal Processor** | `wifi-densepose-signal` | SpotFi phase sanitization, Hampel filter, STFT spectrogram, Fresnel geometry, BVP |\n| **Subcarrier Selection** | `ruvector-mincut` + `ruvector-attn-mincut` | Dynamic sensitive/insensitive partitioning, attention-gated noise suppression |\n| **Fresnel Solver** | `ruvector-solver` | Sparse Neumann series O(sqrt(n)) for TX-body-RX distance estimation |\n| **Graph Transformer** | `wifi-densepose-train` | COCO BodyGraph (17 kp, 16 edges), cross-attention CSI→pose, GCN message passing |\n| **SONA** | `sona` crate | Micro-LoRA (rank-4) adaptation, EWC++ catastrophic forgetting prevention |\n| **Vital Signs** | `wifi-densepose-signal` | FFT-based breathing (0.1-0.5 Hz) and heartbeat (0.8-2.0 Hz) extraction |\n| **REST API** | `wifi-densepose-sensing-server` | Axum server: `/api/v1/sensing`, `/health`, `/vital-signs`, `/bssid`, `/sona` |\n| **WebSocket** | `wifi-densepose-sensing-server` | Real-time pose, sensing, and vital sign streaming on `:3001` |\n| **Analytics** | `wifi-densepose-mat` | Fall detection, activity recognition, START triage (WiFi-Mat disaster module) |\n| **Web UI** | `ui/` | Three.js scene, Gaussian splat visualization, signal dashboard |\n\n\u003c/details\u003e\n\n---\n\n## 🖥️ CLI Usage\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eRust Sensing Server\u003c/strong\u003e — Primary CLI interface\u003c/summary\u003e\n\n```bash\n# Start with simulated data (no hardware)\n./target/release/sensing-server --source simulate --ui-path ../../ui\n\n# Start with ESP32 CSI hardware\n./target/release/sensing-server --source esp32 --udp-port 5005\n\n# Start with Windows WiFi RSSI\n./target/release/sensing-server --source wifi\n\n# Run vital sign benchmark\n./target/release/sensing-server --benchmark\n\n# Export RVF model 
package\n./target/release/sensing-server --export-rvf model.rvf\n\n# Train a model\n./target/release/sensing-server --train --dataset data/ --epochs 100\n\n# Load trained model with progressive loading\n./target/release/sensing-server --model wifi-densepose-v1.rvf --progressive\n```\n\n| Flag | Description |\n|------|-------------|\n| `--source` | Data source: `auto`, `wifi`, `esp32`, `simulate` |\n| `--http-port` | HTTP port for UI and REST API (default: 8080) |\n| `--ws-port` | WebSocket port (default: 8765) |\n| `--udp-port` | UDP port for ESP32 CSI frames (default: 5005) |\n| `--benchmark` | Run vital sign benchmark (1000 frames) and exit |\n| `--export-rvf` | Export RVF container package and exit |\n| `--load-rvf` | Load model config from RVF container |\n| `--save-rvf` | Save model state on shutdown |\n| `--model` | Load trained `.rvf` model for inference |\n| `--progressive` | Enable progressive loading (Layer A instant start) |\n| `--train` | Train a model and exit |\n| `--dataset` | Path to dataset directory (MM-Fi or Wi-Pose) |\n| `--epochs` | Training epochs (default: 100) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"rest-api--websocket\"\u003e\u003c/a\u003e\u003cstrong\u003eREST API \u0026 WebSocket\u003c/strong\u003e — Endpoints reference\u003c/summary\u003e\n\n#### REST API (Rust Sensing Server)\n\n```bash\nGET  /api/v1/sensing              # Latest sensing frame\nGET  /api/v1/vital-signs          # Breathing, heart rate, confidence\nGET  /api/v1/bssid                # Multi-BSSID registry\nGET  /api/v1/model/layers         # Progressive loading status\nGET  /api/v1/model/sona/profiles  # SONA profiles\nPOST /api/v1/model/sona/activate  # Activate SONA profile\n```\n\nWebSocket: `ws://localhost:3001/ws/sensing` (real-time sensing + vital signs)\n\n\u003e Default ports (Docker): HTTP 3000, WS 3001. Binary defaults: HTTP 8080, WS 8765. 
Override with `--http-port` / `--ws-port`.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003ca id=\"hardware-support-1\"\u003e\u003c/a\u003e\u003cstrong\u003eHardware Support\u003c/strong\u003e — Devices, cost, and guides\u003c/summary\u003e\n\n| Hardware | CSI | Cost | Guide |\n|----------|-----|------|-------|\n| **ESP32-S3** | Native | ~$8 | [Tutorial #34](https://github.com/ruvnet/RuView/issues/34) |\n| Intel 5300 | Firmware mod | ~$15 | Linux `iwl-csi` |\n| Atheros AR9580 | ath9k patch | ~$20 | Linux only |\n| Any Windows WiFi | RSSI only | $0 | [Tutorial #36](https://github.com/ruvnet/RuView/issues/36) |\n| Any macOS WiFi | RSSI only (CoreWLAN) | $0 | [ADR-025](docs/adr/ADR-025-macos-corewlan-wifi-sensing.md) |\n| Any Linux WiFi | RSSI only (`iw`) | $0 | Requires `iw` + `CAP_NET_ADMIN` |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003ePython Legacy CLI\u003c/strong\u003e — v1 API server commands\u003c/summary\u003e\n\n```bash\nwifi-densepose start                    # Start API server\nwifi-densepose -c config.yaml start     # Custom config\nwifi-densepose -v start                 # Verbose logging\nwifi-densepose status                   # Check status\nwifi-densepose stop                     # Stop server\nwifi-densepose config show              # Show configuration\nwifi-densepose db init                  # Initialize database\nwifi-densepose tasks list               # List background tasks\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDocumentation Links\u003c/strong\u003e\u003c/summary\u003e\n\n- [WiFi-Mat User Guide](docs/wifi-mat-user-guide.md) | [Domain Model](docs/ddd/wifi-mat-domain-model.md)\n- [ADR-021](docs/adr/ADR-021-vital-sign-detection-rvdna-pipeline.md) | [ADR-022](docs/adr/ADR-022-windows-wifi-enhanced-fidelity-ruvector.md) | [ADR-023](docs/adr/ADR-023-trained-densepose-model-ruvector-pipeline.md)\n\n\u003c/details\u003e\n\n---\n\n## 🧪 
Testing\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003e542+ tests across 7 suites\u003c/strong\u003e — zero mocks, hardware-free simulation\u003c/summary\u003e\n\n```bash\n# Rust tests (primary — 542+ tests)\ncd rust-port/wifi-densepose-rs\ncargo test --workspace\n\n# Sensing server tests (229 tests)\ncargo test -p wifi-densepose-sensing-server\n\n# Vital sign benchmark\n./target/release/sensing-server --benchmark\n\n# Python tests\npython -m pytest v1/tests/ -v\n\n# Pipeline verification (no hardware needed)\n./verify\n```\n\n| Suite | Tests | What It Covers |\n|-------|-------|----------------|\n| sensing-server lib | 147 | Graph transformer, trainer, SONA, sparse inference, RVF |\n| sensing-server bin | 48 | CLI integration, WebSocket, REST API |\n| RVF integration | 16 | Container build, read, progressive load |\n| Vital signs integration | 18 | FFT detection, breathing, heartbeat |\n| wifi-densepose-signal | 83 | SOTA algorithms, Doppler, Fresnel |\n| wifi-densepose-mat | 139 | Disaster response, triage, localization |\n| wifi-densepose-wifiscan | 91 | 8-stage RSSI pipeline |\n\n\u003c/details\u003e\n\n---\n\n## 🚀 Deployment\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDocker deployment\u003c/strong\u003e — Production setup with docker-compose\u003c/summary\u003e\n\n```bash\n# Rust sensing server (132 MB)\ndocker pull ruvnet/wifi-densepose:latest\ndocker run -p 3000:3000 -p 3001:3001 -p 5005:5005/udp ruvnet/wifi-densepose:latest\n\n# Python pipeline (569 MB)\ndocker pull ruvnet/wifi-densepose:python\ndocker run -p 8765:8765 -p 8080:8080 ruvnet/wifi-densepose:python\n\n# Both via docker-compose\ncd docker \u0026\u0026 docker compose up\n\n# Export RVF model\ndocker run --rm -v $(pwd):/out ruvnet/wifi-densepose:latest --export-rvf /out/model.rvf\n```\n\n### Environment Variables\n\n```bash\nRUST_LOG=info                    # Logging level\nWIFI_INTERFACE=wlan0             # WiFi interface for RSSI\nPOSE_CONFIDENCE_THRESHOLD=0.7    # 
Minimum confidence\nPOSE_MAX_PERSONS=10              # Max tracked individuals\n```\n\n\u003c/details\u003e\n\n---\n\n## 📊 Performance Metrics\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eMeasured benchmarks\u003c/strong\u003e — Rust sensing server, validated via cargo bench\u003c/summary\u003e\n\n### Rust Sensing Server\n\n| Metric | Value |\n|--------|-------|\n| Vital sign detection | **11,665 fps** (86 µs/frame) |\n| Full CSI pipeline | **54,000 fps** (18.47 µs/frame) |\n| Motion detection | **186 ns** (~5,400x vs Python) |\n| Docker image | 132 MB |\n| Memory usage | ~100 MB |\n| Test count | 542+ |\n\n### Python vs Rust\n\n| Operation | Python | Rust | Speedup |\n|-----------|--------|------|---------|\n| CSI Preprocessing | ~5 ms | 5.19 µs | 1000x |\n| Phase Sanitization | ~3 ms | 3.84 µs | 780x |\n| Feature Extraction | ~8 ms | 9.03 µs | 890x |\n| Motion Detection | ~1 ms | 186 ns | 5400x |\n| **Full Pipeline** | ~15 ms | 18.47 µs | **810x** |\n\n\u003c/details\u003e\n\n---\n\n## 🤝 Contributing\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDev setup, code standards, PR process\u003c/strong\u003e\u003c/summary\u003e\n\n```bash\ngit clone https://github.com/ruvnet/RuView.git\ncd RuView\n\n# Rust development\ncd rust-port/wifi-densepose-rs\ncargo build --release\ncargo test --workspace\n\n# Python development\npython -m venv venv \u0026\u0026 source venv/bin/activate\npip install -r requirements-dev.txt \u0026\u0026 pip install -e .\npre-commit install\n```\n\n1. **Fork** the repository\n2. **Create** a feature branch (`git checkout -b feature/amazing-feature`)\n3. **Commit** your changes\n4. 
**Push** and open a Pull Request\n\n\u003c/details\u003e\n\n---\n\n## 📄 Changelog\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eRelease history\u003c/strong\u003e\u003c/summary\u003e\n\n### v3.2.0 — 2026-03-03\n\nEdge intelligence: 24 hot-loadable WASM modules for on-device CSI processing on ESP32-S3.\n\n- **ADR-041 Edge Intelligence Modules** — 24 `no_std` Rust modules compiled to `wasm32-unknown-unknown`, loaded via WASM3 on ESP32; 8 categories covering signal intelligence, adaptive learning, spatial reasoning, temporal analysis, AI security, quantum-inspired, autonomous systems, and exotic algorithms\n- **Vendor Integration** — Algorithms ported from `midstream` (DTW, attractors, Flash Attention, min-cut, optimal transport) and `sublinear-time-solver` (PageRank, HNSW, sparse recovery, spiking NN)\n- **On-device gesture learning** — User-teachable DTW gesture recognition with 3-rehearsal protocol and 16 template slots\n- **Lifelong learning (EWC++)** — Elastic Weight Consolidation prevents catastrophic forgetting when learning new tasks\n- **AI security modules** — FNV-1a replay detection, injection/jamming detection, 6D behavioral anomaly profiling with Mahalanobis scoring\n- **Self-healing mesh** — 8-node mesh with health tracking, degradation/recovery hysteresis, and coverage redistribution\n- **Common utility library** — `vendor_common.rs` shared across all 24 modules: CircularBuffer, EMA, WelfordStats, DTW, FixedPriorityQueue, vector math\n- **243 tests passing** — All modules include comprehensive inline tests; 0 failures\n- **Security audit** — 15 findings addressed (1 critical, 3 high, 6 medium, 5 low)\n\n### v3.1.0 — 2026-03-02\n\nMultistatic sensing, persistent field model, and cross-viewpoint fusion — the biggest capability jump since v2.0.\n\n- **Project RuvSense (ADR-029)** — Multistatic mesh: TDM protocol, channel hopping (ch1/6/11), multi-band frame fusion, coherence gating, 17-keypoint Kalman tracker with re-ID; 10 new signal 
modules (5,300+ lines)\n- **RuvSense Persistent Field Model (ADR-030)** — 7 exotic sensing tiers: field normal modes (SVD), RF tomography, longitudinal drift detection, intention prediction, cross-room identity, gesture classification, adversarial detection\n- **Project RuView (ADR-031)** — Cross-viewpoint attention with geometric bias, Geometric Diversity Index, viewpoint fusion orchestrator; 5 new ruvector modules (2,200+ lines)\n- **TDM Hardware Protocol** — ESP32 sensing coordinator: sync beacons, slot scheduling, clock drift compensation (±10ppm), 20 Hz aggregate rate\n- **Channel-Hopping Firmware** — ESP32 firmware extended with hop table, timer-driven channel switching, NDP injection stub; NVS config for all TDM parameters; fully backward-compatible\n- **DDD Domain Model** — 6 bounded contexts, ubiquitous language, aggregate roots, domain events, full event bus specification\n- **`ruvector-crv` 6-stage CRV signal-line integration (ADR-033)** — Maps Coordinate Remote Viewing methodology to WiFi CSI: gestalt classification, sensory encoding, GNN topology, SNN coherence gating, differentiable search, MinCut partitioning; cross-session convergence for multi-room identity continuity\n- **ADR-032 multistatic mesh security hardening** — HMAC-SHA256 beacon auth, SipHash-2-4 frame integrity, NDP rate limiter, coherence gate timeout, bounded buffers, NVS credential zeroing, atomic firmware state\n- **ADR-032a QUIC transport layer** — `midstreamer-quic` TLS 1.3 AEAD for aggregator nodes, dual-mode security (ManualCrypto/QuicTransport), QUIC stream mapping, connection migration, congestion control\n- **ADR-033 CRV signal-line sensing integration** — Architecture decision record for the 6-stage CRV pipeline mapping to ruvector components\n- **Temporal gesture matching** — `midstreamer-temporal-compare` DTW/LCS/edit-distance gesture classification with quantized feature comparison\n- **Attractor drift analysis** — `midstreamer-attractor` Takens' theorem phase-space 
embedding with Lyapunov exponent regime detection (Stable/Periodic/Chaotic)\n- **v0.3.0 published** — All 15 workspace crates published to [crates.io](https://crates.io/crates/wifi-densepose-core) with updated dependencies\n- **28,000+ lines of new Rust code** across 26 modules with 400+ tests\n- **Security hardened** — Bounded buffers, NaN guards, no panics in public APIs, input validation at all boundaries\n\n### v3.0.0 — 2026-03-01\n\nMajor release: AETHER contrastive embedding model, AI signal processing backbone, cross-platform adapters, Docker Hub images, and comprehensive README overhaul.\n\n- **Project AETHER (ADR-024)** — Self-supervised contrastive learning for WiFi CSI fingerprinting, similarity search, and anomaly detection; 55 KB model fits on ESP32\n- **AI Backbone (`wifi-densepose-ruvector`)** — 7 RuVector integration points replacing hand-tuned thresholds with attention, graph algorithms, and smart compression; [published to crates.io](https://crates.io/crates/wifi-densepose-ruvector)\n- **Cross-platform RSSI adapters** — macOS CoreWLAN and Linux `iw` Rust adapters with `#[cfg(target_os)]` gating (ADR-025)\n- **Docker images published** — `ruvnet/wifi-densepose:latest` (132 MB Rust) and `:python` (569 MB)\n- **Project MERIDIAN (ADR-027)** — Cross-environment domain generalization: gradient reversal, geometry-conditioned FiLM, virtual domain augmentation, contrastive test-time training; zero-shot room transfer\n- **10-phase DensePose training pipeline (ADR-023/027)** — Graph transformer, 6-term composite loss, SONA adaptation, RVF packaging, hardware normalization, domain-adversarial training\n- **Vital sign detection (ADR-021)** — FFT-based breathing (6-30 BPM) and heartbeat (40-120 BPM), 11,665 fps\n- **WiFi scan domain layer (ADR-022/025)** — 8-stage signal intelligence pipeline for Windows, macOS, and Linux\n- **700+ Rust tests** — All passing, zero mocks\n\n### v2.0.0 — 2026-02-28\n\nComplete Rust sensing server, SOTA signal processing, WiFi-Mat 
disaster response, ESP32 hardware, RuVector integration, guided installer, and security hardening.\n\n- **Rust sensing server** — Axum REST API + WebSocket, 810x speedup over Python, 54K fps pipeline\n- **RuVector integration** — 11 vendored crates for HNSW, attention, GNN, temporal compression, min-cut, solver\n- **6 SOTA signal algorithms (ADR-014)** — SpotFi, Hampel, Fresnel, spectrogram, subcarrier selection, BVP\n- **WiFi-Mat disaster response** — START triage, 3D localization, priority alerts — 139 tests\n- **ESP32 CSI hardware** — Binary frame parsing, $54 starter kit, 20 Hz streaming\n- **Guided installer** — 7-step hardware detection, 8 install profiles\n- **Three.js visualization** — 3D body model, 17 joints, real-time WebSocket\n- **Security hardening** — 10 vulnerabilities fixed\n\n\u003c/details\u003e\n\n---\n\n## 📄 License\n\nMIT License — see [LICENSE](LICENSE) for details.\n\n## 📞 Support\n\n[GitHub Issues](https://github.com/ruvnet/RuView/issues) | [Discussions](https://github.com/ruvnet/RuView/discussions) | [PyPI](https://pypi.org/project/wifi-densepose/)\n\n---\n\n**WiFi DensePose** — Privacy-preserving human pose estimation through WiFi signals.\n","funding_links":[],"categories":["HarmonyOS","Applications","🤖 AI \u0026 Machine Learning","Rust","monitoring"],"sub_categories":["Windows Manager","Utilities"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fruvnet%2FRuView","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fruvnet%2FRuView","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fruvnet%2FRuView/lists"}