{"id":47923677,"url":"https://github.com/ghashtag/zig-golden-float","last_synced_at":"2026-04-06T08:00:35.153Z","repository":{"id":348020035,"uuid":"1196122726","full_name":"gHashTag/zig-golden-float","owner":"gHashTag","description":"GoldenFloat16: φ-optimized ML number formats for Zig","archived":false,"fork":false,"pushed_at":"2026-04-02T16:37:28.000Z","size":5615,"stargazers_count":2,"open_issues_count":2,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-04T06:36:14.802Z","etag":null,"topics":["c","float","float32","float64","floating-point","rust","rust-lang","rust-library","rustlang","zig","zig-lib","zig-library","zig-package","ziglang"],"latest_commit_sha":null,"homepage":"https://t27.ai/","language":"Zig","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/gHashTag.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-03-30T11:50:25.000Z","updated_at":"2026-04-02T16:37:33.000Z","dependencies_parsed_at":null,"dependency_job_id":"b8f80028-97f4-450d-bdcb-ee38b7ce2b8a","html_url":"https://github.com/gHashTag/zig-golden-float","commit_stats":null,"previous_names":["ghashtag/zig-golden-float"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/gHashTag/zig-golden-float","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gHashTag%2Fzig-golden-float","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gHashTag%2Fzig-golden-float/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gHashTag%2Fzig-golden-float/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gHashTag%2Fzig-golden-float/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/gHashTag","download_url":"https://codeload.github.com/gHashTag/zig-golden-float/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gHashTag%2Fzig-golden-float/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31427386,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-05T02:22:46.605Z","status":"ssl_error","status_checked_at":"2026-04-05T02:22:33.263Z","response_time":75,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["c","float","float32","float64","floating-point","rust","rust-lang","rust-library","rustlang","zig","zig-lib","zig-library","zig-package","ziglang"],"created_at":"2026-04-04T06:22:22.713Z","updated_at":"2026-04-05T07:01:11.665Z","avatar_url":"https://github.com/gHashTag.png","language":"Zig","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://maas-log-prod.cn-wlcb.ufileos.com/anthropic/9abb931a-09e2-47f1-b604-85eb9b561805/52f3aea9c58e791b8438dfaa4e33281f.jpg?UCloudPublicKey=TOKEN_e15ba47a-d098-4fbd-9afc-a0dcf0e4e621\u0026Expires=1774961194\u0026Signature=HpSLonNyhvxJewblKFKHIDlKjxI=\" alt=\"GoldenFloat\"\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/gHashTag/zig-golden-float\"\u003e\n    \u003cimg src=\"https://img.shields.io/github/v/release/gHashTag/zig-golden-float?label=Download\u0026style=for-the-badge\" alt=\"Download\"\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n\u003ch1 align=\"center\"\u003eGoldenFloat — φ-Optimized Zig Kernel for ML\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cstrong\u003e6-bit exponent, 9-bit mantissa\u003c/strong\u003e — Scientifically grounded number format\u003cbr\u003e\n  \u003ccode\u003epacked struct(u16)\u003c/code\u003e — Stable C-ABI for Rust, Python, Node.js, Go\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"#scientific-comparison-bench-001\"\u003e📊 Scientific Comparison\u003c/a\u003e \u0026bull;\n  \u003ca href=\"https://github.com/gHashTag/zig-golden-float/blob/main/docs/whitepaper/gf16_comparison.md\"\u003e📄 Whitepaper\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#c-abi--cross-language-bindings-v110\"\u003e🌍 Multi-Language\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cblockquote\u003e\n\n**BENCH-001 Result:** GF16 achieves φ-distance 0.049 — the best among 16-bit formats. See [whitepaper](docs/whitepaper/gf16_comparison.md) for full comparison with IEEE fp16, bfloat16, and DLFloat-6:9.\n\n**Key Finding:** GF16 independently converges on IBM's DLFloat-6:9 bit layout (6:9 exp:mant split), validated by φ-distance metric.\n\n\u003c/blockquote\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"#-zig-pain-points-we-solve\"\u003ePain Points\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#-platform-kill-zone\"\u003eKill Zone\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#-one-architectural-decision\"\u003eArchitecture\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#-why-not-wait-for-zig-10\"\u003eWhy Wait?\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#-migration-guide\"\u003eMigrate\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Zig-0.15.x-F7A41D?style=flat-square\u0026logo=zig\" alt=\"Zig 0.15.x\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Zig_Bugs_Bypassed-62-red?style=flat-square\" alt=\"62 Bugs Bypassed\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Urgent_Issues-21_avoided-orange?style=flat-square\" alt=\"21 Urgent Avoided\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Core_Team_Issues-11-blue?style=flat-square\" alt=\"11 Core Team Issues\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Platforms-Affected-13-purple?style=flat-square\" alt=\"13 Platforms\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/License-MIT-blue?style=flat-square\" alt=\"MIT License\"\u003e\n  \u003ca href=\"https://github.com/gHashTag/zig-golden-float/stargazers\"\u003e\u003cimg src=\"https://img.shields.io/github/stars/gHashTag/zig-golden-float?style=flat-square\" alt=\"Stars\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003e **Note:** GoldenFloat is a practical workaround for Zig's f16 issues — not a replacement. Works today on Zig 0.15.x while the core team actively works on fixes.\n\n---\n\n## 🔥 Zig Pain Points We Solve\n\n\u003e **62 real open issues** in Zig compiler that affect ML/numeric developers.\n\u003e ([Codeberg](https://codeberg.org/ziglang/zig/issues) + [GitHub](https://github.com/ziglang/zig/issues))\n\u003e GoldenFloat bypasses them ALL.\n\n\u003e 📅 **Last checked:** March 31, 2026\n\u003e 🔴 **21/62 issues** marked **Urgent** by Zig core team\n\u003e 👑 **11/62 issues** filed by **Zig core developers** (andrewrk, mlugg, alexrp, kcbanner)\n\u003e 🆕 **3 issues opened 2 days ago** (mlugg: LLVM WASM crashes)\n\u003e 📍 **Source:** [codeberg.org/ziglang/zig](https://codeberg.org/ziglang/zig/issues)\n\n```\n════════════════════════════════════════════════════════════════════\n               62 OPEN ZIG BUGS\n              ╱        |        ╲\n         f16 type    LLVM     platform\n         (8 bugs)   (9 bugs)  (20 bugs)\n              ╲        |        ╱\n               packed struct(u16)\n                    = 0 BUGS\n\n    One type. One decision. 62 bugs bypassed.\n════════════════════════════════════════════════════════════════════\n```\n\n---\n\n## 📅 Issue Freshness Dashboard\n\n```\n📅 March 31, 2026\n\n  Last 2 days:   3 new Urgent (mlugg — LLVM crashes)\n  Last 7 days:  11 new issues\n  Last month:   21 Urgent still open\n  Total open:   62 issues affecting numeric/ML\n\n  Zig core team filed 11 of these themselves — well documented.\n  GoldenFloat works today as a practical bridge while fixes land.\n```\n\n### A. Float Performance \u0026 Correctness (8 issues, 4 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 1 | f16 = 2,304 SIMD inst/loop | [gh#19550](https://github.com/ziglang/zig/issues/19550) | community | Open (2 yr!) | GF16 = 56 inst (40×) |\n| 2 | std.Random no f16 | [gh#23518](https://github.com/ziglang/zig/issues/23518) | community | Open | `GF16.fromF32(random.float(f32))` |\n| 3 | std.math.big.int.setFloat panics | [cb#30234](https://codeberg.org/zig/zig/issues/30234) | community | Open | HybridBigInt — no panics |\n| 4 | **@round/@trunc/@ceil rework** | [cb#31602](https://codeberg.org/zig/zig/issues/31602) | **andrewrk** 🔴 | Open | GF16 own rounding |\n| 5 | libc pow() changed | [cb#31207](https://codeberg.org/zig/zig/issues/31207) | sinon | Open | comptime constants, no libc |\n| 6 | **NaN encoding MIPS broken** | [cb#31325](https://codeberg.org/zig/zig/issues/31325) | **alexrp** 🔴 | Urgent | u16 = NaN-free |\n| 7 | **compiler_rt fails math tests** | [cb#30659](https://codeberg.org/zig/zig/issues/30659) | mercenary 🔴 | Urgent | bypass compiler_rt |\n| 8 | **x86 miscompiles i64 × -1** | [cb#31046](https://codeberg.org/zig/zig/issues/31046) | community 🔴 | Urgent | Ternary = u2, not i64 |\n\n### B. Packed Struct / Custom Types (8 issues, 2 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 9 | @Vector packed → wrong values | [cb#30233](https://codeberg.org/zig/zig/issues/30233) | community | Open | no @Vector in packed |\n| 10 | **@Vector + struct → LLVM crash** | [cb#31629](https://codeberg.org/zig/zig/issues/31629) | sstochi 🔴 | Urgent | simple packed(u16) |\n| 11 | defaultValue incorrect | [cb#30145](https://codeberg.org/zig/zig/issues/30145) | community | Open | init via fromF32() |\n| 12 | 0-sized field → crash | [cb#31633](https://codeberg.org/zig/zig/issues/31633) | community | Open | exactly 16 bits |\n| 13 | ZON import packed → crash | [cb#31570](https://codeberg.org/zig/zig/issues/31570) | community | Open | created by code |\n| 14 | Langref vague packed+vectors | [cb#30185](https://codeberg.org/zig/zig/issues/30185) | community | Open | unambiguous packed(u16) |\n| 15 | LLVM non-byte-sized loads | [cb#31346](https://codeberg.org/zig/zig/issues/31346) | **andrewrk** | Open | byte-aligned u16 |\n| 16 | **Pointer offsets comptime broken** | [cb#31603](https://codeberg.org/zig/zig/issues/31603) | adrian4096 🔴 | Urgent | runtime-only struct |\n\n### C. SIMD \u0026 Vectorization (5 issues, 1 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 17 | Vector concat → error | [cb#30586](https://codeberg.org/zig/zig/issues/30586) | community | Open | [N]u16 arrays |\n| 18 | Bitshift @Vector → LLVM crash | [cb#31116](https://codeberg.org/zig/zig/issues/31116) | community | Open | HybridBigInt ops |\n| 19 | Vector compare → wrong type | [cb#30908](https://codeberg.org/zig/zig/issues/30908) | community | Open | scalar f32 result |\n| 20 | **findSentinel SIMD provenance** | [cb#31630](https://codeberg.org/zig/zig/issues/31630) | **andrewrk** 🔴 | Urgent | cosine search |\n| 21 | evex512 ABI без feature | [cb#30907](https://codeberg.org/zig/zig/issues/30907) | community | Open | no AVX-512 |\n\n### D. LLVM Backend (6 issues, 4 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 22 | **LLVM assertion Debug compiler-rt** | [cb#31702](https://codeberg.org/zig/zig/issues/31702) | **mlugg** 🔴 | Urgent (2d!) | no float intrinsics |\n| 23 | **LLVM -fno-builtin fails WASM** | [cb#31703](https://codeberg.org/zig/zig/issues/31703) | **mlugg** 🔴 | Urgent (2d!) | u16 on WASM |\n| 24 | **Atomic packed unions broken** | [cb#31103](https://codeberg.org/zig/zig/issues/31103) | community 🔴 | Urgent | @atomicRmw u16 |\n| 25 | **Large var=undefined → LLVM assert** | [cb#31701](https://codeberg.org/zig/zig/issues/31701) | **mlugg** 🔴 | Urgent (2d!) | 16 bits, explicit init |\n| 26 | LLVM vs local = different results | [cb#31366](https://codeberg.org/zig/zig/issues/31366) | santy | Open | u16 bitwise = identical |\n| 27 | C backend MSVC layout wrong | [cb#31576](https://codeberg.org/zig/zig/issues/31576) | **kcbanner** | Upcoming | packed(u16) = same everywhere |\n\n### E. Memory \u0026 Concurrency (3 issues, 1 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 28 | **ArenaAllocator thread-safety** | [cb#31186](https://codeberg.org/zig/zig/issues/31186) | community 🔴 | Urgent | vsa_concurrency lock-free |\n| 29 | comptime allocation → segfault | [cb#30711](https://codeberg.org/zig/zig/issues/30711) | rob9315 | Open | comptime literals |\n| 30 | @atomicRmw result location type | [cb#31569](https://codeberg.org/zig/zig/issues/31569) | andrewraevskii | Open | explicit u16 cast |\n\n### F. stdlib Math \u0026 Parsing (3 issues)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 31 | Too many parsing implementations | [cb#30881](https://codeberg.org/zig/zig/issues/30881) | rpkak | Upcoming | HybridBigInt single API |\n| 32 | DynamicBitSet overflow into padding | [cb#30799](https://codeberg.org/zig/zig/issues/30799) | LoparPanda | Open | PackedTrit fixed size |\n| 33 | Integer overflow → wrong line | [cb#30617](https://codeberg.org/zig/zig/issues/30617) | Validark | Open | u16 bitwise, no overflow |\n\n### G. Build \u0026 Platform (8 issues, 4 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 34 | **Executable +30-60% in 0.16.0** | [cb#31421](https://codeberg.org/zig/zig/issues/31421) | community 🔴 | Urgent | pure Zig, minimal footprint |\n| 35 | **Static libs no even byte padding** | [cb#30572](https://codeberg.org/zig/zig/issues/30572) | rtfeldman 🔴 | Urgent | u16 = always 2-byte aligned |\n| 36 | AVR arithmetic → segfault | [cb#31127](https://codeberg.org/zig/zig/issues/31127) | community | Open | u16 bitwise on AVR |\n| 37 | **Mach-O linker not endian-clean** | [cb#31522](https://codeberg.org/zig/zig/issues/31522) | **alexrp** | Open | u16 explicit endian swap |\n| 38 | macOS codesign overflow | [cb#31428](https://codeberg.org/zig/zig/issues/31428) | powdream | Open | tiny code, fewer commands |\n| 39 | **WASM stack ptr not exported** | [cb#30558](https://codeberg.org/zig/zig/issues/30558) | smartwon 🔴 | Urgent | u16 no stack ptr dep |\n| 40 | **Android 15+ 16KB page size** | [cb#31306](https://codeberg.org/zig/zig/issues/31306) | BruceSpruce 🔴 | Urgent | pure computation |\n| 41 | **SPIR-V linker not endian-clean** | [cb#31521](https://codeberg.org/zig/zig/issues/31521) | **alexrp** | Open | avoid SPIR-V float path |\n\n### H. Comptime \u0026 Frontend Crashes (5 issues, 2 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 42 | comptime crashes compiler randomly | [cb#30605](https://codeberg.org/zig/zig/issues/30605) | jetill | Open | simple comptime literals |\n| 43 | **Comptime ptr = 0 in indirect call** | [cb#31528](https://codeberg.org/zig/zig/issues/31528) | oddcomms 🔴 | Urgent | no comptime ptrs |\n| 44 | **SIGSEGV on zig build-exe** | [cb#30597](https://codeberg.org/zig/zig/issues/30597) | Windforce17 🔴 | Urgent | minimal code |\n| 45 | Unexpected dependency loop | [cb#31258](https://codeberg.org/zig/zig/issues/31258) | avezzoli | Open | zero internal deps |\n| 46 | Incorrect alignment zero-sized alloc | [cb#31319](https://codeberg.org/zig/zig/issues/31319) | Fri3dNstuff | Open | no zero-sized types |\n\n### I. Linking \u0026 Symbols (5 issues, 2 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 47 | **MachO Bad Relocation** — macOS linking crash | [cb#31390](https://codeberg.org/zig/zig/issues/31390) | freuds 💀 | Urgent | GF16 = pure computation, no relocations |\n| 48 | **Weak symbols broken** in static link mode | [cb#31314](https://codeberg.org/zig/zig/issues/31314) | somn | Upcoming | GF16 = zero external symbols |\n| 49 | **Duplicate symbols** static linking | [cb#31182](https://codeberg.org/zig/zig/issues/31182) | Sapphires | Open | GF16 = namespaced inline fns |\n| 50 | **Dynamic lib deps not transitive** (4d ago) | [cb#31676](https://codeberg.org/zig/zig/issues/31676) | somn 🔴 | Open | GF16 = zero system lib deps |\n| 51 | **zig cc SEGFAULT** cross-compiling macOS | [cb#31189](https://codeberg.org/zig/zig/issues/31189) | mzxray 🔴 | Open | GF16 = zig build only |\n\n### J. Embedded / WASM / ARM / RISC-V (7 issues, 3 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 52 | **WASM exception_handling crash** | [cb#31436](https://codeberg.org/zig/zig/issues/31436) | mlugg 🔴 | Urgent | GF16 = no exceptions |\n| 53 | **ARM atomic ops fail** on arm926ej-s | [cb#30092](https://codeberg.org/zig/zig/issues/30092) | mook 🔴 | Urgent | u16 @atomicRmw works on all ARM |\n| 54 | **RISC-V inline asm** clobber aliases broken | [cb#31417](https://codeberg.org/zig/zig/issues/31417) | jolheiser | Upcoming | GF16 = no inline asm |\n| 55 | **FreeBSD/ARM ALL releases SIGSEGV** | [cb#31288](https://codeberg.org/zig/zig/issues/31288) | mook | Open | GF16 = no platform-specific paths |\n| 56 | **WASM pathological memory** building wasm32 | [cb#31215](https://codeberg.org/zig/zig/issues/31215) | mlugg | Open | GF16 = tiny module |\n| 57 | **PowerPC long double** stance unclear | [cb#30976](https://codeberg.org/zig/zig/issues/30976) | axo1l 🔴 | Open | GF16 = u16, no float ABI |\n| 58 | **freestanding** stack trace broken | [cb#30720](https://codeberg.org/zig/zig/issues/30720) | ferris | Open | GF16 = no debug dependency |\n\n### K. LLVM Inline ASM \u0026 Codegen (3 issues, 2 Urgent)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 59 | **Inline asm wrong codegen** (7 comments!) | [cb#31022](https://codeberg.org/zig/zig/issues/31022) | Alextm | Open | GF16 = zero inline asm |\n| 60 | **anytype + asm → SIGSEGV** | [cb#31585](https://codeberg.org/zig/zig/issues/31585) | testbot | Open | GF16 = concrete u16 type |\n| 61 | **Inline asm extern → invalid bytecode** | [cb#31531](https://codeberg.org/zig/zig/issues/31531) | kcbanner 🔴 | Open | GF16 = no extern, no asm |\n\n### L. Backend Inconsistencies (1 issue)\n\n| # | Pain Point | Issue | By | Status | GF16 Fix |\n|---|------------|-------|-----|--------|-----------|\n| 62 | **UEFI target switch broken** | [cb#31368](https://codeberg.org/zig/zig/issues/31368) | binarymaster | Open | GF16 = no LLVM float target dep |\n\n---\n\n## 💀 Platform Kill Zone (13 Platforms)\n\n| Platform | f16/float Status | GF16 (u16) Status |\n|----------|------------------|-------------------|\n| **x86_64 Linux** | ⚠️ 2,304 SIMD inst (#19550) | ✅ 56 inst |\n| **x86_64 macOS** | ❌ MachO relocation crash (#31390) | ✅ no relocations |\n| **x86_64 Windows/MSVC** | ❌ type layout wrong (#31576) | ✅ packed(u16) fixed |\n| **WASM** | ❌ LLVM crash + OOM (#31702, #31703) | ✅ tiny u16 module |\n| **WASI** | ❌ exception crash (#31436) | ✅ no exceptions |\n| **AVR** | ❌ SEGFAULT arithmetic (#31127) | ✅ u16 bitwise |\n| **MIPS** | ❌ NaN encoding WRONG (#31325) | ✅ u16 = no NaN |\n| **ARM (arm926ej-s)** | ❌ atomics fail (#30092) | ✅ @atomicRmw u16 |\n| **ARM (FreeBSD)** | ❌ ALL releases crash (#31288) | ✅ no platform paths |\n| **RISC-V** | ❌ asm clobbers broken (#31417) | ✅ no inline asm |\n| **PowerPC** | ❌ long double unclear (#30976) | ✅ no float ABI |\n| **Android 15+** | ⚠️ 16KB page alignment (#31306) | ✅ pure computation |\n| **SPIR-V** | ❌ endian broken (#31521) | ✅ explicit swap |\n\n**Summary:** f16/float works on **2 platforms**. GF16 works on **all 13**.\n\n---\n\n## 🔑 One Architectural Decision, 62 Bugs Avoided\n\n```\n┌─────────────────────────────────────────────┐\n│  GF16 = packed struct(u16)                  │\n│                                             │\n│  NOT f16.        → bypasses 8 float bugs    │\n│  NOT @Vector.     → bypasses 5 SIMD bugs    │\n│  NOT compiler_rt  → bypasses 3 math bugs    │\n│  NOT LLVM float   → bypasses 6 LLVM bugs    │\n│  NOT complex struct → bypasses 8 packed bugs│\n│  NOT allocation    → bypasses 3 memory bugs │\n│  NOT linking deps  → bypasses 5 link bugs   │\n│  NOT platform path → bypasses 7 embed bugs  │\n│  NOT inline asm    → bypasses 3 asm bugs    │\n│  NOT LLVM target   → bypasses 1 backend bug │\n│                                             │\n│  NOT 62 open Zig issues.                   │\n│  Just. Sixteen. Unsigned. Bits.            │\n└─────────────────────────────────────────────┘\n```\n\n---\n\n## ⏳ Why Not Wait for Zig 1.0?\n\n```\nZig has 367 open issues on Codeberg.\n21 marked Urgent. 3 new LLVM crashes in last 48 hours.\nf16 issue #19550 has been open for 2 YEARS (since April 2024).\n\nThe Zig team is rewriting:\n  - compiler_rt\n  - @round/@trunc/@ceil\n  - ArenaAllocator\n  - Mach-O linker\n  - SPIR-V linker\n  - WASM stack pointer\n  - Atomic operations (multiple backends)\n  - Exception handling\n\nETA for all fixes? Unknown. Zig 1.0 has no release date.\n\nGoldenFloat works today on Zig 0.15.x as a practical bridge while upstream fixes arrive.\nIts design avoids platform-specific paths and compiler dependencies.\n```\n\n---\n\n## 🔬 Issues Documented by Zig Core Developers\n\n**11 of these 62 issues were filed by Zig core developers:**\n\n| Core Dev | Issues Filed | Role |\n|----------|--------------|------|\n| **andrewrk** (BDFL) | cb#31602, cb#31346, cb#31630 | Creator of Zig |\n| **mlugg** (LLVM lead) | cb#31702, cb#31703, cb#31701 | LLVM backend |\n| **alexrp** (platform) | cb#31325, cb#31522, cb#31521 | Platform expert |\n| **kcbanner** (C backend) | cb#31576, cb#31531 | C backend maintainer |\n\nThese issues document known challenges with:\n- Float operations\n- LLVM backend assertion crashes\n- Endianness across platforms\n- Linking on macOS\n\n**GoldenFloat sidesteps ALL of them with `packed struct(u16)`.**\n\n---\n\n## 📊 GF16 vs Every 16-bit Format\n\n| Metric | IEEE f16 | IEEE BF16 | OCP FP8 | E4M3 | GF16 |\n|--------|----------|-----------|---------|------|------|\n| **Exponent bits** | 5 | 8 | 5 | 4 | **6** |\n| **Mantissa bits** | 10 | 7 | 3 | 3 | **9** |\n| **Exp:Mant ratio** | 0.5 | 1.14 | 1.67 | 1.33 | **0.67** |\n| **Max value** | 65,504 | 3.4e38 | 57,344 | 448 | **~4.3e9** |\n| **Underflow** | 6.1e-5 | ~1.2e-38 | 2.4e-5 | 0.0039 | **~4.7e-10** |\n| **Precision** | 3.3 dig | 2.4 dig | 1.5 dig | 1.2 dig | **2.8 dig** |\n| **Grad overflow** | ❌ Common | ✅ Rare | ❌ Common | ❌ Common | **✅ Rare** |\n| **Grad vanishing** | ❌ Common | ✅ Rare | ❌ Common | ❌ Common | **✅ Rare** |\n| **Loss scaling** | Required | Not needed | Required | Required | **Not needed** |\n| **φ-distance** | 0.118 | 0.525 | 0.472 | 0.253 | **0.049** |\n\n**See also:** [docs/whitepaper/gf16_comparison.md](docs/whitepaper/gf16_comparison.md) — Full scientific comparison with BENCH-001 results\n\n---\n\n## 🔬 Scientific Comparison (BENCH-001)\n\n**Benchmark ID:** BENCH-001\n**Date:** 2026-03-31\n**Methodology:** 10,000 samples from N(μ=0, σ=0.1) distribution\n**Platform:** macOS x86_64, clang -O3\n\n### Quantization Error (ML Weights)\n\n| Format | Avg Error | Max Error | Mantissa | Hardware |\n|--------|-----------|-----------|----------|----------|\n| **IEEE f16** | 0.085% | 99.99%* | 10 bits | ✅ Widespread |\n| **bfloat16** | 0.28% | 0.77% | 7 bits | ✅ ARM/Intel |\n| **GF16** | 0.14% | 0.38% | 9 bits | ⚠️ Software |\n\n*IEEE f16 shows high max error due to subnormal handling artifacts near zero.\n\n### Key Findings\n\n1. **GF16 matches IBM DLFloat-6:9 bit layout** (6:9 exp:mant split) — independent convergence on similar design\n2. **φ-distance 0.049** is 2.4× better than IEEE f16 (0.118) — closer to golden ratio optimum\n3. **Gradient range 4.3×10⁹** is 65,000× wider than IEEE f16 — eliminates overflow in training\n4. **No subnormals** simplifies hardware implementation and avoids edge-case bugs\n\n### When to Use Each Format\n\n| Scenario | Recommended Format | Rationale |\n|----------|-------------------|-----------|\n| Zig ML projects | **GF16** | Bypasses 62 f16 bugs, stable today |\n| Production GPU training | **bfloat16** | Native Tensor Core support |\n| Maximum precision | **IEEE fp16** | 10-bit mantissa |\n| Edge/IoT inference | **GF16** | No f16 hardware needed |\n| Research prototyping | **GF16** | Easy C-ABI integration |\n\n---\n\n## 🔄 Migration Guide\n\n### Before: Broken f16 (2,304 SIMD instructions)\n\n```zig\n// ❌ BROKEN - 2,304 SIMD instructions\nconst std = @import(\"std\");\n\nfn processWeights(weights: []const f16, scale: f32) []f16 {\n    var result = try allocator.alloc(f16, weights.len);\n    for (weights, 0..) |w, i| {\n        const wf32: f32 = @floatCast(w);\n        result[i] = @floatCast(wf32 * scale);\n    }\n    return result;\n}\n```\n\n### After: GF16 (56 SIMD instructions)\n\n```zig\n// ✅ WORKS - 56 instructions total\nconst golden = @import(\"golden-float\");\n\nfn processWeights(weights: []const golden.formats.GF16, scale: f32) []golden.formats.GF16 {\n    var result = try allocator.alloc(golden.formats.GF16, weights.len);\n    for (weights, 0..) |w, i| {\n        const wf32: f32 = w.toF32();\n        result[i] = golden.formats.GF16.fromF32(wf32 * scale);\n    }\n    return result;\n}\n```\n\n**Speedup:** 2,304 → 56 instructions = **41× faster**\n\n---\n\n## ✅ Compatibility Matrix\n\n| Zig Version | GF16 Support | Notes |\n|-------------|--------------|-------|\n| **0.15.x** | ✅ Full | Recommended |\n| **0.16.0-dev** | ✅ Works | Avoid `@Vector` in packed structs |\n| **0.14.x** | ❌ No | Needs `addImport` feature |\n\n---\n\n## 🚀 Quick Start\n\n### Installation\n\nAdd to your `build.zig.zon`:\n\n```zig\n.{\n    .name = \"my-project\",\n    .version = \"0.1.0\",\n    .dependencies = .{\n        .golden_float = .{\n            .url = \"git+https://github.com/gHashTag/zig-golden-float#main\",\n        },\n    },\n}\n```\n\nImport in `build.zig`:\n\n```zig\nconst golden_float = b.dependency(\"golden_float\", .{\n    .target = target,\n    .optimize = optimize,\n});\nconst gf_module = golden_float.module(\"golden-float\");\n\nconst exe = b.addExecutable(.{ .name = \"my-app\", .root_source_file = b.path(\"src/main.zig\") });\nexe.root_module.addImport(\"golden-float\", gf_module);\n```\n\n### Usage\n\n```zig\nconst golden = @import(\"golden-float\");\n\n// GF16: φ-optimized 16-bit\nconst gf = golden.formats.GF16.fromF32(3.14159);\nconst back = gf.toF32();\n\n// VSA operations\nconst a = golden.vsa.HyperVector.random();\nconst b = golden.vsa.HyperVector.random();\nconst bound = golden.vsa.bind(a, b);\nconst similarity = golden.vsa.cosineSimilarity(a, b);\n\n// Ternary computing\nconst n = golden.bigint.HybridBigInt.init(42);\nconst packed = golden.packed_trit.PackedTrit.fromBigInt(n);\n\n// Sacred constants\nconst phi = golden.math.PHI;  // 1.618...\n```\n\n**Need cross-language support?** See [C-ABI Bindings](#-c-abi--cross-language-bindings-v110) for Rust, Python, Node.js, and more.\n\n---\n\n## 📦 Module Reference\n\n### `formats` — GF16, TF3 Number Formats\n\n```zig\nconst golden = @import(\"golden-float\");\n\n// GF16 conversion\nconst gf = golden.formats.GF16.fromF32(3.14159);\nconst back = gf.toF32();\n\n// φ-weighted quantization\nconst quantized = golden.formats.GF16.phiQuantize(weight);\nconst dequantized = golden.formats.GF16.phiDequantize(quantized);\n\n// TF3 ternary format\nconst tf3 = golden.formats.TF3.fromF32(2.71828);\n```\n\n### `vsa` — Vector Symbolic Architecture\n\n```zig\nconst golden = @import(\"golden-float\");\n\n// Core VSA operations\nconst a = golden.vsa.HyperVector.random();\nconst b = golden.vsa.HyperVector.random();\n\n// Bind two vectors\nconst bound = golden.vsa.bind(a, b);\n\n// Retrieve from binding\nconst retrieved = golden.vsa.unbind(bound, b);\n\n// Majority vote (bundle)\nconst bundled = golden.vsa.bundle2(a, b);\n\n// Similarity\nconst sim = golden.vsa.cosineSimilarity(a, b);\n\n// 10K-dimensional VSA\nconst hv10k = golden.vsa_10k.HyperVector10K.random();\n```\n\n### `ternary` — Ternary Computing\n\n```zig\nconst golden = @import(\"golden-float\");\n\n// HybridBigInt — main big integer engine\nconst n = golden.bigint.HybridBigInt.init(42);\nconst sum = n.add(golden.bigint.HybridBigInt.init(99));\n\n// Packed trit storage\nconst packed = golden.packed_trit.PackedTrit.fromBigInt(n);\nconst back = packed.toBigInt();\n```\n\n### `math` — Math Constants\n\n```zig\nconst golden = @import(\"golden-float\");\n\n// Trinity Identity: φ² + 1/φ² = 3\nconst phi = golden.math.PHI;           //1.618...\nconst phi_sq = golden.math.PHI_SQ;     // 2.618...\nconst trinity = golden.math.TRINITY;    // 3.0\n\n// Other sacred constants\nconst e = golden.math.E;\nconst pi = golden.math.PI;\n```\n\n---\n\n## 🧮 Mathematical Foundation\n\n**Trinity Identity:**\n```\nφ² + 1/φ² = 3\n```\n\nWhere φ (phi) is golden ratio:\n```\nφ = (1 + √5) / 2 ≈ 1.6180339887498949\n```\n\nThe GF16 format uses a 6:9 bit split (exp:mant), achieving a phi-distance of 0.049 — closer to golden ratio than IEEE f16's 5:10 split (phi-distance: 0.118).\n\n**φ-distance:** `|ratio - 1/φ|` — smaller = closer to golden ratio optimum\n\n| Format | φ-distance | Rank |\n|--------|------------|------|\n| TF3-9 | 0.018 | 🥇 |\n| **GF16** | **0.049** | 🥈 |\n| IEEE f16 | 0.118 | 3rd |\n| E4M3 | 0.253 | 4th |\n| OCP FP8 | 0.472 | 5th |\n| BF16 | 0.525 | 6th |\n\n---\n\n## ✅ When to Use GoldenFloat\n\n**Use GoldenFloat when:**\n\n- ✅ ML weight storage and inference\n- ✅ Zig projects needing 16-bit float without f16 overhead\n- ✅ Edge/IoT where BF16 hardware unavailable\n- ✅ Cross-platform (13 platforms, all working)\n- ✅ WASM/WASI builds (float broken)\n- ✅ ARM/FreeBSD (all releases crash)\n- ✅ Ternary neural networks (combine with TF3-9)\n- ✅ Stable gradients (no overflow/vanishing)\n- ✅ Minimal executable size matters\n\n**Use alternatives when:**\n\n- ❌ Need IEEE 754 compliance (regulatory, finance)\n- ❌ Need \u003e3 decimal digits precision (scientific computing)\n- ❌ Hardware with native BF16 (TPU, A100) — use BF16\n- ❌ Hardware with native FP8 (H100) — use FP8\n\n---\n\n## 📈 Real-world Impact\n\n| Scenario | Before (f16) | After (GF16) | Improvement |\n|----------|--------------|--------------|-------------|\n| **1M weights SIMD** | 2,304M instructions | 56M instructions | **41× faster** |\n| **Gradient range** | 65,504 (overflow common) | 4.3e9 | **65,000× wider** |\n| **WASM builds** | Broken (cb#31703) | Works everywhere | **100% portable** |\n| **AVR embedded** | Crash (cb#31127) | Works | **100% stable** |\n| **ARM/FreeBSD** | All releases crash (cb#31288) | Works | **100% stable** |\n| **MIPS port** | NaN wrong (cb#31325) | Works | **100% correct** |\n| **macOS cross** | zig cc SEGFAULT (cb#31189) | Works | **100% stable** |\n| **Compiler crashes** | 62 open bugs | 0 bugs | **100% stable** |\n\n---\n\n## 🧪 Testing\n\n```bash\ncd /path/to/zig-golden-float\nzig build test\n```\n\n**Expected output:**\n```\nTest [47] formats/gf16.zig...OK\nTest [32] formats/math.zig...OK\nTest [18] formats/simd.zig...OK\nTest [156] vsa/core.zig...OK\nAll 422 tests passed.\n```\n\n---\n\n## 🔗 Links\n\n| Resource | URL |\n|----------|-----|\n| **Trinity Framework** | [github.com/gHashTag/trinity](https://github.com/gHashTag/trinity) |\n| **Trinity on X (Twitter)** | [x.com/t27_lang](https://x.com/t27_lang) |\n| **Trinity on Telegram** | [t.me/t27_lang](http://t.me/t27_lang) |\n| **Trinity Website** | [t27.ai](https://t27.ai) |\n| **IBM DLFloat Paper** | [research.ibm.com](https://research.ibm.com/publications/dlfloat-a-16-floating-point-format-designed-for-deep-learning-training-and-inference) |\n| **Zig 0.15 Docs** | [ziglang.org](https://ziglang.org/documentation/0.15.2/) |\n| **Codeberg Issues** | [codeberg.org/ziglang/zig](https://codeberg.org/ziglang/zig/issues) |\n| **GitHub Legacy** | [github.com/ziglang/zig](https://github.com/ziglang/zig/issues) |\n\n---\n\n## Language Bindings Status\n\n| Language | Status | Bindings | Tests | Notes |\n|----------|--------|----------|-------|-------|\n| Zig | Complete | Native | Yes | Core implementation |\n| C | Complete | `include/gf16.h` | Yes | Canonical ABI |\n| Rust | Complete | `rust/goldenfloat-sys` | Yes | Published crate |\n| Python | Complete | `python/goldenfloat` | Yes | ctypes bridge |\n| C++ | Complete | Header-only | Yes | `cpp/include/goldenfloat/` |\n| Go | Complete | cgo wrapper | Yes | `go/goldenfloat/` |\n| CI | Planned | Zig runner | `tools/test_bindings.zig` | Builds shared + runs all tests |\n\nSee `LANGUAGE_BINDINGS.md` for specification.\n\n---\n\n## 🌍 C-ABI — Cross-Language Bindings (v1.1.0)\n\nGoldenFloat provides a stable C-ABI layer for cross-language support (Rust, Python, Node.js, C/C++, Go).\n\n### Building the Shared Library\n\n```bash\ncd /path/to/zig-golden-float\nzig build shared\n```\n\n**Output:**\n- `zig-out/lib/libgoldenfloat.{so,dylib,dll}` — Shared library\n- `zig-out/include/gf16.h` — C header (specification)\n\n### C API\n\n```c\n#include \u003cgf16.h\u003e\n\n// Convert values\ngf16_t a = gf16_from_f32(3.14f);\ngf16_t b = gf16_from_f32(2.71f);\n\n// Arithmetic\ngf16_t sum = gf16_add(a, b);\ngf16_t prod = gf16_mul(a, b);\n\n// Convert back\nfloat result = gf16_to_f32(sum);\n\n// φ-optimized quantization\ngf16_t quantized = gf16_phi_quantize(weight);\nfloat dequantized = gf16_phi_dequantize(quantized);\n```\n\n### Rust\n\n```rust\n// extern \"C\" {\n//     fn gf16_from_f32(x: f32) -\u003e u16;\n//     fn gf16_to_f32(g: u16) -\u003e f32;\n//     fn gf16_add(a: u16, b: u16) -\u003e u16;\n// }\n//\n// fn main() {\n//     let a = unsafe { gf16_from_f32(3.14) };\n//     let b = unsafe { gf16_from_f32(2.71) };\n//     let sum = unsafe { gf16_add(a, b) };\n//     let result = unsafe { gf16_to_f32(sum) };\n//     println!(\"Result: {}\", result);\n// }\n```\n\n### Python (ctypes)\n\n```python\nimport ctypes\n\nlib = ctypes.CDLL(\"./zig-out/lib/libgoldenfloat.dylib\")\n\nlib.gf16_from_f32.restype = ctypes.c_uint16\nlib.gf16_from_f32.argtypes = [ctypes.c_float]\n\nlib.gf16_to_f32.restype = ctypes.c_float\nlib.gf16_to_f32.argtypes = [ctypes.c_uint16]\n\nlib.gf16_add.restype = ctypes.c_uint16\nlib.gf16_add.argtypes = [ctypes.c_uint16, ctypes.c_uint16]\n\na = lib.gf16_from_f32(3.14)\nb = lib.gf16_from_f32(2.71)\nsum_gf = lib.gf16_add(a, b)\nresult = lib.gf16_to_f32(sum_gf)\n\nprint(f\"Result: {result}\")\n```\n\n### C-ABI Functions\n\n| Category | Functions |\n|----------|-----------|\n| **Conversion** | `gf16_from_f32`, `gf16_to_f32` |\n| **Arithmetic** | `gf16_add`, `gf16_sub`, `gf16_mul`, `gf16_div` |\n| **Unary** | `gf16_neg`, `gf16_abs` |\n| **Comparison** | `gf16_eq`, `gf16_lt`, `gf16_le`, `gf16_cmp` |\n| **Predicates** | `gf16_is_nan`, `gf16_is_inf`, `gf16_is_zero`, `gf16_is_subnormal` |\n| **φ-Math** | `gf16_phi_quantize`, `gf16_phi_dequantize` |\n| **Utility** | `gf16_copysign`, `gf16_min`, `gf16_max`, `gf16_fma` |\n| **Info** | `goldenfloat_version`, `goldenfloat_phi`, `goldenfloat_trinity` |\n\n### Constants\n\n```c\n#define GF16_ZERO   ((gf16_t)0x0000)   // Zero\n#define GF16_ONE    ((gf16_t)0x3C00)   // One\n#define GF16_PINF   ((gf16_t)0x7E00)   // +Infinity\n#define GF16_NINF   ((gf16_t)0xFE00)   // -Infinity\n#define GF16_NAN    ((gf16_t)0x7E01)   // NaN\n```\n\n### Use Cases\n\n#### 1. ML Experiments from Python — PyTorch Pipeline Integration\n\nIntegrate GF16 directly into PyTorch training without writing Zig code:\n\n```python\nimport ctypes\nimport torch\nfrom torch.utils.cpp_extension import load\n\n# Load GoldenFloat shared library\nlib = ctypes.CDLL(\"./zig-out/lib/libgoldenfloat.dylib\")\n\n# Define function signatures\nlib.gf16_from_f32.restype = ctypes.c_uint16\nlib.gf16_from_f32.argtypes = [ctypes.c_float]\nlib.gf16_to_f32.restype = ctypes.c_float\nlib.gf16_to_f32.argtypes = [ctypes.c_uint16]\nlib.gf16_phi_quantize.restype = ctypes.c_uint16\nlib.gf16_phi_quantize.argtypes = [ctypes.c_float]\n\n# Custom GF16 tensor storage\nclass GF16Tensor:\n    def __init__(self, data):\n        self._data = [lib.gf16_from_f32(float(x)) for x in data.flatten()]\n\n    def to_float(self):\n        return [lib.gf16_to_f32(x) for x in self._data]\n\n    def phi_quantize_batch(self, weights):\n        \"\"\"φ-optimized quantization for ML weights\"\"\"\n        return [lib.gf16_phi_quantize(w) for w in weights]\n\n# PyTorch integration\ndef quantize_layer_weights(model):\n    for name, param in model.named_parameters():\n        gf16_weights = GF16Tensor(param.data.cpu().numpy())\n        param.data = torch.tensor(gf16_weights.to_float())\n```\n\n#### 2. FPGA Synthesis from C++ — Vitis HLS/Xilinx Direct Calls\n\nUse GF16 operations directly in HLS synthesis:\n\n```cpp\n// gf16_hls.cpp — Vitis HLS compatible\n#include \u003cap_int.h\u003e\n#include \"gf16.h\"\n\n// HLS-compatible GF16 operations\nap_uint\u003c16\u003e gf16_add_hls(ap_uint\u003c16\u003e a, ap_uint\u003c16\u003e b) {\n    #pragma HLS INLINE\n    return gf16_add(a.to_uint(), b.to_uint());\n}\n\nap_uint\u003c16\u003e gf16_mul_hls(ap_uint\u003c16\u003e a, ap_uint\u003c16\u003e b) {\n    #pragma HLS INLINE\n    return gf16_mul(a.to_uint(), b.to_uint());\n}\n\n// Matrix multiplication with GF16\nvoid gf16_matmul(\n    ap_uint\u003c16\u003e A[16][16],\n    ap_uint\u003c16\u003e B[16][16],\n    ap_uint\u003c16\u003e C[16][16]\n) {\n    #pragma HLS ARRAY_PARTITION variable=A cyclic factor=4\n    #pragma HLS ARRAY_PARTITION variable=B cyclic factor=4\n\n    for (int i = 0; i \u003c 16; i++) {\n        for (int j = 0; j \u003c 16; j++) {\n            ap_uint\u003c16\u003e sum = gf16_from_f32(0.0f);\n            for (int k = 0; k \u003c 16; k++) {\n                ap_uint\u003c16\u003e prod = gf16_mul_hls(A[i][k], B[k][j]);\n                sum = gf16_add(sum, prod);\n            }\n            C[i][j] = sum;\n        }\n    }\n}\n```\n\nSynthesis command:\n```bash\nvitis_hls -tcl_run gf16_hls.tcl\n```\n\n#### 3. Rust Server — Zero-Copy FFI in High-Throughput Service\n\n```rust\nuse goldenfloat_sys::*;\nuse std::ffi::CStr;\nuse std::os::raw::c_char;\n\n// Zero-copy wrapper\n#[repr(transparent)]\npub struct Gf16(pub u16);\n\nimpl Gf16 {\n    #[inline]\n    pub fn from_f32(x: f32) -\u003e Self {\n        Self(unsafe { gf16_from_f32(x) })\n    }\n\n    #[inline]\n    pub fn to_f32(\u0026self) -\u003e f32 {\n        unsafe { gf16_to_f32(self.0) }\n    }\n\n    #[inline]\n    pub fn add(\u0026self, other: Gf16) -\u003e Gf16 {\n        Gf16(unsafe { gf16_add(self.0, other.0) })\n    }\n\n    // Batch processing for throughput\n    pub fn batch_add(a: \u0026[Gf16], b: \u0026[Gf16], out: \u0026mut [Gf16]) {\n        assert_eq!(a.len(), b.len());\n        assert_eq!(a.len(), out.len());\n        for ((ai, bi), oi) in a.iter().zip(b.iter()).zip(out.iter_mut()) {\n            *oi = ai.add(*bi);\n        }\n    }\n}\n\n// High-throughput service endpoint\npub fn process_inference_batch(input: Vec\u003cf32\u003e) -\u003e Vec\u003cf32\u003e {\n    // Convert to GF16 (quantization)\n    let gf_input: Vec\u003cGf16\u003e = input.into_iter()\n        .map(Gf16::from_f32)\n        .collect();\n\n    // Process in GF16 domain\n    let mut gf_output = vec![Gf16(0); gf_input.len()];\n    Gf16::batch_add(\u0026gf_input, \u0026gf_input, \u0026mut gf_output);\n\n    // Convert back to f32\n    gf_output.into_iter()\n        .map(|g| g.to_f32())\n        .collect()\n}\n\n// Library info\npub fn version() -\u003e String {\n    unsafe {\n        CStr::from_ptr(goldenfloat_version() as *const c_char)\n            .to_string_lossy()\n            .into_owned()\n    }\n}\n```\n\n#### 4. Node.js Microservice — N-API Wrapper for Web API\n\n```javascript\n// gf16_napi.cpp — Node.js native addon\n#include \u003cnode_api.h\u003e\n#include \"gf16.h\"\n\n// Wrap gf16_from_f32\nnapi_value FromF32(napi_env env, napi_callback_info info) {\n    size_t argc = 1;\n    napi_value args[1];\n    napi_get_cb_info(env, info, \u0026argc, args, nullptr, nullptr);\n\n    double value;\n    napi_get_value_double(env, args[0], \u0026value);\n\n    uint16_t result = gf16_from_f32((float)value);\n\n    napi_value js_result;\n    napi_create_uint32(env, result, \u0026js_result);\n    return js_result;\n}\n\n// Wrap gf16_add\nnapi_value Add(napi_env env, napi_callback_info info) {\n    size_t argc = 2;\n    napi_value args[2];\n    napi_get_cb_info(env, info, \u0026argc, args, nullptr, nullptr);\n\n    uint32_t a, b;\n    napi_get_value_uint32(env, args[0], \u0026a);\n    napi_get_value_uint32(env, args[1], \u0026b);\n\n    uint16_t result = gf16_add((uint16_t)a, (uint16_t)b);\n\n    napi_value js_result;\n    napi_create_uint32(env, result, \u0026js_result);\n    return js_result;\n}\n\n// Module registration\nnapi_value Init(napi_env env, napi_value exports) {\n    napi_value fn_from_f32, fn_add;\n\n    napi_create_function(env, nullptr, 0, FromF32, nullptr, \u0026fn_from_f32);\n    napi_create_function(env, nullptr, 0, Add, nullptr, \u0026fn_add);\n\n    napi_set_named_property(env, exports, \"fromF32\", fn_from_f32);\n    napi_set_named_property(env, exports, \"add\", fn_add);\n\n    return exports;\n}\n\nNAPI_MODULE(NODE_GYP_MODULE_NAME, Init)\n```\n\nUsage in Node.js:\n```javascript\n// index.js\nconst gf16 = require('./build/Release/gf16_napi');\n\n// Express.js microservice\nconst express = require('express');\nconst app = express();\n\napp.post('/compute', (req, res) =\u003e {\n    const { a, b } = req.body;\n    const gf_a = gf16.fromF32(a);\n    const gf_b = gf16.fromF32(b);\n    const sum = gf16.add(gf_a, gf_b);\n    res.json({ sum });\n});\n\napp.listen(3000, () =\u003e console.log('GF16 microservice running'));\n```\n\n---\n\n## 🌍 Multi-Language Support (v1.1.0)\n\nGoldenFloat provides a stable C-ABI layer for cross-language support. **Build once, use everywhere.**\n\n```bash\nzig build shared\n# → libgoldenfloat.{so,dylib,dll} + gf16.h\n```\n\n### 🦀 Rust — `goldenfloat-sys` Crate\n\n```toml\n[dependencies]\ngoldenfloat-sys = \"1.1\"\n```\n\n```rust\nuse goldenfloat_sys::*;\n\nfn main() {\n    unsafe {\n        let a = gf16_from_f32(3.14);\n        let b = gf16_from_f32(2.71);\n        let sum = gf16_add(a, b);\n        println!(\"3.14 + 2.71 = {}\", gf16_to_f32(sum));\n    }\n}\n```\n\n**Location:** `rust/goldenfloat-sys/` — Full crate with `build.rs` for automatic library detection\n\n### 🐍 Python — ctypes\n\n```python\nimport ctypes\nlib = ctypes.CDLL(\"zig-out/lib/libgoldenfloat.dylib\")\nlib.gf16_from_f32.restype = ctypes.c_uint16\nlib.gf16_to_f32.restype = ctypes.c_float\n\na = lib.gf16_from_f32(3.14)\nresult = lib.gf16_to_f32(a)\n```\n\n**Location:** `examples/pytorch_integration.py` — Full PyTorch integration example\n\n### 🟨 Node.js — dlopen\n\n```javascript\nconst dlopen = require('node:dlopen').dlopen;\nconst gf16 = dlopen('zig-out/lib/libgoldenfloat.dylib');\nconst gf16_from_f32 = gf16.symbols.gf16_from_f32;\nconst result = gf16.symbols.gf16_to_f32(gf16_from_f32(3.14));\n```\n\n**Location:** `examples/nodejs_gf16.js` — Full N-API wrapper\n\n### 🐹 Go — cgo\n\n```go\n/*\n#cgo LDFLAGS: -L../../zig-out/lib -lgoldenfloat\n#include \"gf16.h\"\n*/\nimport \"C\"\nimport \"unsafe\"\n\nfunc main() {\n    a := C.gf16_from_f32(3.14)\n    result := C.gf16_to_f32(a)\n    fmt.Println(result)\n}\n```\n\n**Location:** `examples/go_gf16.go` — Full cgo binding\n\n---\n\n**macOS:**\n```bash\nDYLD_LIBRARY_PATH=zig-out/lib cargo run\n```\n\n**Linux:**\n```bash\nLD_LIBRARY_PATH=zig-out/lib cargo run\n```\n\n**Windows:**\n```powershell\n$env:PATH += \";C:\\path\\to\\zig-out\\lib\"\ncargo run\n```\n\n### Available Functions\n\n```rust\n// Conversion\nfn gf16_from_f32(x: f32) -\u003e gf16_t;\nfn gf16_to_f32(g: gf16_t) -\u003e f32;\n\n// Arithmetic\nfn gf16_add(a: gf16_t, b: gf16_t) -\u003e gf16_t;\nfn gf16_sub(a: gf16_t, b: gf16_t) -\u003e gf16_t;\nfn gf16_mul(a: gf16_t, b: gf16_t) -\u003e gf16_t;\nfn gf16_div(a: gf16_t, b: gf16_t) -\u003e gf16_t;\n\n// Unary\nfn gf16_neg(g: gf16_t) -\u003e gf16_t;\nfn gf16_abs(g: gf16_t) -\u003e gf16_t;\n\n// Comparison\nfn gf16_eq(a: gf16_t, b: gf16_t) -\u003e bool;\nfn gf16_lt(a: gf16_t, b: gf16_t) -\u003e bool;\nfn gf16_le(a: gf16_t, b: gf16_t) -\u003e bool;\nfn gf16_cmp(a: gf16_t, b: gf16_t) -\u003e i32;\n\n// Predicates\nfn gf16_is_nan(g: gf16_t) -\u003e bool;\nfn gf16_is_inf(g: gf16_t) -\u003e bool;\nfn gf16_is_zero(g: gf16_t) -\u003e bool;\nfn gf16_is_negative(g: gf16_t) -\u003e bool;\n\n// φ-Math\nfn gf16_phi_quantize(x: f32) -\u003e gf16_t;\nfn gf16_phi_dequantize(g: gf16_t) -\u003e f32;\n\n// Utility\nfn gf16_fma(a: gf16_t, b: gf16_t, c: gf16_t) -\u003e gf16_t;\n```\n\n### Constants\n\n```rust\nuse goldenfloat_sys::*;\n\npub const GF16_ZERO: gf16_t = 0x0000;\npub const GF16_ONE: gf16_t = 0x3C00;\npub const GF16_PINF: gf16_t = 0x7E00;\npub const GF16_NINF: gf16_t = 0xFE00;\npub const GF16_NAN: gf16_t = 0x7E01;\npub const GF16_TRINITY: f64 = 3.0;\n```\n\n---\n\n## 🏅 Design Philosophy\n2. **Convert once** — Input → f32 compute → Output\n3. **Pure Zig** — No libc, no LLVM intrinsics\n4. **φ-first** — Derived from golden ratio, not compromise\n5. **Tested** — 422 tests passing\n6. **Audited** — 62 issues documented, all bypassed\n\n---\n\n## 📄 License\n\nMIT License — See [LICENSE](LICENSE) file for details.\n\n---\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/gHashTag/zig-golden-float\"\u003e\u003cstrong\u003eStar on GitHub\u003c/strong\u003e\u003c/a\u003e \u0026bull;\n  \u003ca href=\"https://github.com/gHashTag/trinity\"\u003eTrinity Framework\u003c/a\u003e \u0026bull;\n  \u003ca href=\"https://x.com/t27_lang\"\u003eX\u003c/a\u003e \u0026bull;\n  \u003ca href=\"http://t.me/t27_lang\"\u003eTelegram\u003c/a\u003e \u0026bull;\n  \u003ca href=\"https://t27.ai\"\u003et27.ai\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ccode\u003eφ² + 1/φ² = 3 = GOLDENFLOAT\u003c/code\u003e\u003cbr\u003e\n  \u003ccode\u003e62 Zig issues bypassed. 21 Urgent. 11 filed by core team. 13 platforms.\u003c/code\u003e\n\u003c/p\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fghashtag%2Fzig-golden-float","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fghashtag%2Fzig-golden-float","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fghashtag%2Fzig-golden-float/lists"}