{"id":45921540,"url":"https://github.com/octoflow-lang/octoflow","last_synced_at":"2026-03-10T18:00:23.656Z","repository":{"id":339462755,"uuid":"1161652378","full_name":"octoflow-lang/octoflow","owner":"octoflow-lang","description":"GPU-Native Programming Language. 4.5 MB binary. Any GPU. Zero dependencies.","archived":false,"fork":false,"pushed_at":"2026-02-28T12:24:43.000Z","size":1803,"stargazers_count":3,"open_issues_count":5,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-02-28T12:43:07.220Z","etag":null,"topics":["gpu-computing","gpu-programming","llm-inference","machine-learning","parallel-computing","programming-language","rust","spir-v","vulkan","zero-dependencies"],"latest_commit_sha":null,"homepage":"https://octoflow-lang.github.io/octoflow/","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/octoflow-lang.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":"docs/roadmap.md","authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-19T11:05:25.000Z","updated_at":"2026-02-28T12:24:46.000Z","dependencies_parsed_at":"2026-02-28T09:02:03.138Z","dependency_job_id":null,"html_url":"https://github.com/octoflow-lang/octoflow","commit_stats":null,"previous_names":["octoflow-lang/octoflow"],"tags_count":8,"template":false,"template_full_name":null,"purl":"pkg:github/octoflow-lang/octoflow","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/octoflow-lang%2Foctoflow","tags_url"
:"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/octoflow-lang%2Foctoflow/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/octoflow-lang%2Foctoflow/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/octoflow-lang%2Foctoflow/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/octoflow-lang","download_url":"https://codeload.github.com/octoflow-lang/octoflow/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/octoflow-lang%2Foctoflow/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30346475,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-10T15:55:29.454Z","status":"ssl_error","status_checked_at":"2026-03-10T15:54:58.440Z","response_time":106,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["gpu-computing","gpu-programming","llm-inference","machine-learning","parallel-computing","programming-language","rust","spir-v","vulkan","zero-dependencies"],"created_at":"2026-02-28T08:50:43.360Z","updated_at":"2026-03-10T18:00:23.632Z","avatar_url":"https://github.com/octoflow-lang.png","language":null,"readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/logo.png\" width=\"140\" alt=\"OctoFlow\"\u003e\n\u003c/p\u003e\n\n\u003ch1 
align=\"center\"\u003eOctoFlow\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cstrong\u003eA GPU-native programming language.\u003c/strong\u003e\u003cbr\u003e\n  4.5 MB binary. Zero dependencies. Any GPU vendor. One file download.\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/octoflow-lang/octoflow/releases/latest\"\u003e\u003cimg alt=\"Release\" src=\"https://img.shields.io/github/v/release/octoflow-lang/octoflow?color=1a9a9a\u0026style=flat-square\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://github.com/octoflow-lang/octoflow/blob/main/LICENSE\"\u003e\u003cimg alt=\"License\" src=\"https://img.shields.io/github/license/octoflow-lang/octoflow?color=1a9a9a\u0026style=flat-square\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://octoflow-lang.github.io/octoflow/\"\u003e\u003cimg alt=\"Website\" src=\"https://img.shields.io/badge/website-octoflow--lang.github.io-1a9a9a?style=flat-square\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"#quickstart\"\u003eQuickstart\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#what-it-looks-like\"\u003eCode Examples\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#the-loom-engine\"\u003eLoom Engine\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#how-this-was-built\"\u003eHow This Was Built\u003c/a\u003e \u0026bull;\n  \u003ca href=\"#looking-for-maintainers\"\u003eLooking for Maintainers\u003c/a\u003e\n\u003c/p\u003e\n\n---\n\n## What is OctoFlow?\n\nOctoFlow is a general-purpose programming language where the GPU is the primary execution target. Not a wrapper around CUDA. Not a shader language. A complete language — with functions, structs, modules, streams, error handling — that happens to run compute on the GPU by default.\n\n```\nlet a = gpu_fill(1.0, 10000000)\nlet b = gpu_fill(2.0, 10000000)\nlet c = gpu_add(a, b)\nprint(\"Sum: {gpu_sum(c)}\")           // 30000000 — computed on GPU\n```\n\nNo SDK. No driver toolkit. No package manager. 
Download one binary, run it.\n\n### At a glance\n\n| | |\n|---|---|\n| **Binary size** | 4.5 MB (single file, all platforms) |\n| **Dependencies** | Zero. Hand-rolled Vulkan bindings, nothing external |\n| **GPU support** | NVIDIA, AMD, Intel — anything with Vulkan |\n| **Stdlib** | 445 modules across 28 domains |\n| **GPU kernels** | 150 pre-compiled SPIR-V shaders, embedded in binary |\n| **Tests** | 966 passing |\n| **License** | MIT (stdlib + everything in this repo) |\n\n---\n\n## Quickstart\n\n### Install\n\n**Windows** (PowerShell):\n```powershell\nirm https://octoflow-lang.github.io/octoflow/install.ps1 | iex\n```\n\n**Linux** (bash):\n```bash\ncurl -fsSL https://octoflow-lang.github.io/octoflow/install.sh | sh\n```\n\n**macOS (Apple Silicon)**:\n```bash\n# Download the latest `octoflow-*-aarch64-macos.tar.gz` from Releases,\n# then run:\ntar xzf octoflow-*-aarch64-macos.tar.gz\nchmod +x octoflow\nsudo mv octoflow /usr/local/bin/   # /usr/local/bin is usually root-owned\noctoflow --version\n```\n\nOr download directly from [Releases](https://github.com/octoflow-lang/octoflow/releases/latest).\n\n### Run\n\n```bash\noctoflow run hello.flow          # run a program\noctoflow repl                    # interactive REPL\noctoflow chat                    # AI-assisted code generation\noctoflow check file.flow         # static analysis\n```\n\n---\n\n## What It Looks Like\n\n### GPU compute in 5 lines\n\n```\nlet a = gpu_fill(1.0, 1000000)\nlet b = gpu_fill(2.0, 1000000)\nlet c = gpu_add(a, b)\nlet d = gpu_scale(c, 0.5)\nprint(\"Total: {gpu_sum(d)}\")       // 1500000\n```\n\nData born on the GPU stays on the GPU. 
No round-trips until you need the result.\n\n### Functional programming\n\n```\nlet nums = [1, 2, 3, 4, 5, 6, 7, 8]\nlet evens = filter(nums, fn(x) x % 2 == 0 end)\nlet squared = map_each(evens, fn(x) x * x end)\nlet total = reduce(squared, 0, fn(acc, x) acc + x end)\nprint(\"Sum of even squares: {total}\")   // 120\n```\n\n### Stream pipelines\n\n```\nstream photo = tap(\"input.jpg\")\nstream enhanced = photo\n    |\u003e brightness(20)\n    |\u003e contrast(1.2)\n    |\u003e saturate(1.1)\nemit(enhanced, \"output.png\")\n```\n\n### Data analysis\n\n```\nuse csv\nuse descriptive\n\nlet data = read_csv(\"sales.csv\")\nlet revenue = csv_column(data, \"revenue\")\n\nprint(\"Mean:   {mean(revenue)}\")\nprint(\"Median: {median(revenue)}\")\nprint(\"P95:    {quantile(revenue, 0.95)}\")\n```\n\n### Error handling\n\n```\nlet result = try(read_file(\"data.txt\"))\nif result.ok\n  print(\"Read {len(result.value)} chars\")\nelse\n  print(\"Error: \" + result.error)\nend\n```\n\nEvery error returns a structured code (E001-E099) with a human-readable fix action.\n\n---\n\n## The Loom Engine\n\nThe Loom Engine is what makes OctoFlow different from \"GPU library with a scripting layer.\"\n\n**The idea:** Queue an entire dispatch chain — hundreds or thousands of GPU kernels — into a single `vkQueueSubmit`. The GPU executes the full pipeline autonomously. 
Zero CPU interruption.\n\n```\nlet vm = loom_boot(1, 0, 16)\nloom_write(vm, 0, data)\nloom_dispatch(vm, \"kernel.spv\", [0, 3, 8], 1)\nlet prog = loom_build(vm)\nloom_run(prog)\nlet result = loom_read_globals(vm, 0, 8)\nloom_free(prog)\nloom_shutdown(vm)\n```\n\nOr use the express API:\n\n```\nlet result = loom_compute(\"kernel.spv\", data, 1024)\n```\n\n**Three tiers of GPU access:**\n- **Tier 1** — One-call ops: `gpu_fill`, `gpu_add`, `gpu_sum`, `gpu_matmul` (simple, immediate)\n- **Tier 2** — Dispatch chains: `loom_boot` → `loom_dispatch` → `loom_build` → `loom_run` (custom pipelines)\n- **Tier 3** — JIT SPIR-V: `ir_begin` → `ir_entry` → ... → `ir_finalize` (generate kernels at runtime)\n\n---\n\n## Standard Library\n\n445 modules. All written in OctoFlow itself. All MIT-licensed and in this repo.\n\n| Domain | What's in it |\n|--------|-------------|\n| **AI \u0026 LLM** | Transformer inference, GGUF loader, BPE tokenizer, streaming generation |\n| **GPU** | 150 kernels, Loom Engine, SPIR-V codegen, dispatch chains, resident buffers |\n| **Media** | Audio DSP, image transforms, video timeline, WAV/BMP/GIF/H.264/MP4/AVI/TTF codecs |\n| **ML** | Regression, classification, clustering, neural networks, decision trees, ensembles |\n| **Statistics** | Descriptive stats, distributions, hypothesis testing, time series, risk metrics |\n| **Science** | Linear algebra, calculus, physics, signal processing, interpolation, optimization |\n| **Data** | CSV, JSON, pipelines, validation, transforms |\n| **Web** | HTTP client/server, URL parsing |\n| **GUI** | Canvas, widgets, layout, ECS, theming, physics2d |\n| **DB** | In-memory columnar database with query engine |\n| **Crypto** | Hashing, encoding, base64, hex |\n| **System** | File I/O, environment, datetime, platform detection, process control |\n\n---\n\n## How This Was Built\n\nOctoFlow is **AI-assisted** from the beginning. LLMs generated the bulk of the code. This is not a secret and not a caveat. 
It's the point.\n\nBut \"AI-assisted\" does not mean \"unreviewed.\" Every architectural decision has a human at the gate:\n\n- **Rust at the OS boundary, .flow for everything else** — human decision\n- **Pure Vulkan, no vendor SDK** — human decision\n- **Zero external dependencies** — human decision\n- **Loom Engine's dispatch chain model** — human decision\n- **23-concept language spec that fits in an LLM prompt** — human decision\n- **JIT SPIR-V emission via IR builder** — human decision\n- **Self-hosting compiler direction** — human decision\n\nThe AI writes code. The human decides what to build, why, and whether it ships.\n\n### The philosophy behind this\n\nTwo principles guide every decision:\n\n**Sustainability** — Can this trajectory continue? Is this adding complexity faster than it can be maintained? Is the test count rising? Is the gotcha list shrinking? If the answer to any of these is \"no,\" the developer stops and fixes before shipping more.\n\n**Empowerment** — Does this increase the user's capacity? Can a non-GPU-programmer go from intent to working GPU code? Does the LLM need *less* help generating correct OctoFlow over time? If a feature makes the language harder to learn or harder for AI to generate, it doesn't ship.\n\nThese aren't marketing. They're the actual decision framework. Every feature, every refactor, every new builtin gets scored against them. Better to ship less and ship right.\n\n---\n\n## Project Status\n\nOctoFlow is **real, working software** — not a concept or prototype. The compiler runs, the GPU dispatches, the tests pass, the demos are live. 
You can download it right now and run GPU compute.\n\nThat said, it's honest to say:\n\n- **v1.5.9** — actively developed, not yet battle-tested by a community\n- **Solo developer** — one person plus AI tools, which is both the strength (fast iteration, coherent vision) and the limitation (bus factor of 1)\n- **Compiler is private** — the stdlib, examples, docs, and everything in this repo are MIT. The compiler Rust source is in a private repo for now. See below.\n\n### What works well today\n\n- GPU compute via Tier 1 (one-call ops) and Tier 2 (Loom dispatch chains)\n- 445-module stdlib covering AI, ML, media, science, data, GUI, and more\n- Interactive REPL with GPU support\n- AI-assisted code generation via `octoflow chat`\n- Sandboxed execution with granular permission flags\n- Cross-vendor GPU support (NVIDIA tested, AMD/Intel via Vulkan)\n\n### What's in progress\n\n- **LLM inference on consumer GPUs** — Running 24GB models on 6GB GPUs via layer streaming. This is the current focus.\n- OctoPress weight compression (3-tier hot/warm/cold cache)\n- AMD GPU validation\n- Tier 3 JIT SPIR-V stabilization\n\n---\n\n## Looking for Maintainers\n\nThis project needs more humans.\n\nOne developer built it to prove the idea works. The idea works. Now it needs people who want to take it further — not just contributors, but **co-maintainers** who want ownership of parts of the system.\n\n### What's on the table\n\n- **Full open source.** The developer is willing to open-source the entire compiler (Rust source, all 3 modules, the full 966-test suite) once there's a team to develop and sustain it. MIT license, same as everything else.\n- **Compiler access now.** Serious maintainer candidates get private repo access immediately. No hoops.\n- **Architectural input.** The language is small enough (23 concepts) that a new maintainer can genuinely understand the whole system. 
You won't be lost in a million-line codebase.\n\n### What would help most\n\n| Area | What's needed |\n|------|--------------|\n| **GPU runtime** | Vulkan experience, help with AMD/Intel validation, dispatch optimization |\n| **Language design** | Someone who cares about keeping the language small and learnable |\n| **Stdlib** | Domain experts — ML, audio, scientific computing, data engineering |\n| **Testing** | More hardware, more edge cases, fuzzing, property-based testing |\n| **Documentation** | Tutorials, guides, examples — written for humans, not just LLMs |\n| **Community** | Someone who wants to help people use this thing |\n\n### Why you might want to\n\n- You think GPU compute should be accessible without CUDA\n- You want to work on a language that's small enough to hold in your head\n- You're curious about what happens when AI writes 90% of the code and a human architects 100% of the decisions\n- You believe in building tools that empower users rather than creating dependency\n\nIf any of that resonates: [open an issue](https://github.com/octoflow-lang/octoflow/issues), email, or just start reading the code. The stdlib is right here. The docs explain the architecture. Jump in.\n\n---\n\n## Documentation\n\n| Document | Description |\n|----------|-------------|\n| [Website](https://octoflow-lang.github.io/octoflow/) | Landing page with live demos |\n| [Language Guide](docs/language-guide.md) | Full language reference |\n| [Loom Engine](docs/loom-engine.md) | GPU VM architecture deep-dive |\n| [Stdlib Reference](docs/stdlib.md) | All 445 modules |\n| [GPU Guide](docs/gpu-guide.md) | GPU compute patterns and best practices |\n| [Builtins](docs/builtins.md) | 210+ built-in functions |\n\n---\n\n## Building from Source\n\nThe compiler source is currently in a private repository. 
If you're interested in building from source or contributing to the compiler, [open an issue](https://github.com/octoflow-lang/octoflow/issues) — the developer will get you access.\n\nThe stdlib and everything in this repo can be explored and modified immediately.\n\n---\n\n## License\n\nMIT. The stdlib, examples, documentation, and everything in this repository.\n\nThe compiler will be MIT too, once it's open-sourced. No license change, no dual licensing, no surprises.\n\n---\n\n\u003cp align=\"center\"\u003e\n  \u003csub\u003eBuilt with AI. Decided by humans. GPU-native from day one.\u003c/sub\u003e\n\u003c/p\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foctoflow-lang%2Foctoflow","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Foctoflow-lang%2Foctoflow","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foctoflow-lang%2Foctoflow/lists"}