{"id":44868211,"url":"https://github.com/benoitc/hornbeam","last_synced_at":"2026-02-25T05:20:48.281Z","repository":{"id":338698957,"uuid":"1158776134","full_name":"benoitc/hornbeam","owner":"benoitc","description":"WSGI/ASGI HTTP server powered by the BEAM. Mix the best of Python (AI, web apps) with Erlang (distribution, concurrency, resilience).","archived":false,"fork":false,"pushed_at":"2026-02-17T14:26:30.000Z","size":785,"stargazers_count":7,"open_issues_count":0,"forks_count":1,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-02-17T15:19:38.796Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Erlang","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/benoitc.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-15T22:25:57.000Z","updated_at":"2026-02-17T14:46:59.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/benoitc/hornbeam","commit_stats":null,"previous_names":["benoitc/hornbeam"],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/benoitc/hornbeam","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/benoitc%2Fhornbeam","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/benoitc%2Fhornbeam/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/benoitc%2Fhornbeam/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/h
osts/GitHub/repositories/benoitc%2Fhornbeam/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/benoitc","download_url":"https://codeload.github.com/benoitc/hornbeam/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/benoitc%2Fhornbeam/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29735336,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-22T20:09:16.275Z","status":"online","status_checked_at":"2026-02-23T02:00:06.784Z","response_time":90,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2026-02-17T12:03:13.933Z","updated_at":"2026-02-23T02:12:25.309Z","avatar_url":"https://github.com/benoitc.png","language":"Erlang","readme":"# Hornbeam\n\n**Hornbeam** is an Erlang-based WSGI/ASGI server that combines Python's web and ML capabilities with Erlang's strengths:\n\n- **Python handles**: Web apps (WSGI/ASGI), ML models, data processing\n- **Erlang handles**: Scaling (millions of connections), concurrency (no GIL), distribution (cluster RPC), fault tolerance, shared state (ETS)\n\nThe name combines \"horn\" (unicorn, like gunicorn) with \"BEAM\" (Erlang VM).\n\n## Features\n\n- **WSGI Support**: Run standard WSGI Python applications\n- **ASGI Support**: Run async ASGI Python applications (FastAPI, Starlette, etc.)\n- **WebSocket**: Full WebSocket support for real-time apps\n- **HTTP/2**: Via 
Cowboy, with multiplexing and server push\n- **Shared State**: ETS-backed state accessible from Python (concurrent-safe)\n- **Distributed RPC**: Call functions on remote Erlang nodes\n- **Pub/Sub**: pg-based publish/subscribe messaging\n- **ML Integration**: Cache ML inference results in ETS\n- **Lifespan**: ASGI lifespan protocol for app startup/shutdown\n- **Hot Reload**: Leverage Erlang's hot code reloading\n\n## Quick Start\n\n```erlang\n%% Start with a WSGI application\nhornbeam:start(\"myapp:application\").\n\n%% Start ASGI app (FastAPI, Starlette, etc.)\nhornbeam:start(\"main:app\", #{worker_class =\u003e asgi}).\n\n%% With all options\nhornbeam:start(\"myapp:application\", #{\n    bind =\u003e \"0.0.0.0:8000\",\n    workers =\u003e 4,\n    worker_class =\u003e asgi,\n    lifespan =\u003e auto\n}).\n```\n\n## Installation\n\nAdd hornbeam to your `rebar.config`:\n\n```erlang\n{deps, [\n    {hornbeam, {git, \"https://github.com/benoitc/hornbeam.git\", {branch, \"main\"}}}\n]}.\n```\n\n## Python Integration\n\n### Shared State (ETS)\n\nPython apps can use Erlang ETS for high-concurrency shared state:\n\n```python\nfrom hornbeam_erlang import state_get, state_set, state_incr\n\ndef application(environ, start_response):\n    path = environ['PATH_INFO']\n\n    # Atomic counter (millions of concurrent increments)\n    views = state_incr(f'views:{path}')\n\n    # Get/set cached data\n    data = state_get('my_key')\n    if data is None:\n        data = compute_expensive()\n        state_set('my_key', data)\n\n    start_response('200 OK', [('Content-Type', 'text/plain')])\n    return [f'Views: {views}'.encode()]\n```\n\n### Distributed RPC\n\nCall functions on remote Erlang nodes:\n\n```python\nimport json\n\nfrom hornbeam_erlang import rpc_call, nodes\n\ndef application(environ, start_response):\n    # Get connected nodes\n    connected = nodes()\n\n    # Call ML model on GPU node\n    result = rpc_call(\n        'gpu@ml-server',      # Remote node\n        'ml_model',           # Module\n        'predict',          
  # Function\n        [data],               # Args\n        timeout_ms=30000\n    )\n\n    start_response('200 OK', [('Content-Type', 'application/json')])\n    return [json.dumps(result).encode()]\n```\n\n### ML Caching\n\nUse ETS to cache ML inference results:\n\n```python\nimport json\n\nfrom hornbeam_ml import cached_inference, cache_stats\n\ndef application(environ, start_response):\n    # Automatically cached by input hash ('model' and 'text' come from your app)\n    embedding = cached_inference(model.encode, text)\n\n    # Check cache stats\n    stats = cache_stats()  # {'hits': 100, 'misses': 10, 'hit_rate': 0.91}\n\n    start_response('200 OK', [('Content-Type', 'application/json')])\n    return [json.dumps({'embedding': embedding}).encode()]\n```\n\n### Pub/Sub Messaging\n\n```python\nimport json\n\nfrom hornbeam_erlang import publish\n\ndef application(environ, start_response):\n    # Publish to topic (all subscribers notified)\n    count = publish('updates', {'type': 'new_item', 'id': 123})\n\n    start_response('200 OK', [('Content-Type', 'application/json')])\n    return [json.dumps({'subscribers_notified': count}).encode()]\n```\n\n## Examples\n\n### Hello World (WSGI)\n\n```python\n# examples/hello_wsgi/app.py\ndef application(environ, start_response):\n    start_response('200 OK', [('Content-Type', 'text/plain')])\n    return [b'Hello from Hornbeam!']\n```\n\n```erlang\nhornbeam:start(\"app:application\", #{pythonpath =\u003e [\"examples/hello_wsgi\"]}).\n```\n\n### Hello World (ASGI)\n\n```python\n# examples/hello_asgi/app.py\nasync def application(scope, receive, send):\n    await send({\n        'type': 'http.response.start',\n        'status': 200,\n        'headers': [[b'content-type', b'text/plain']],\n    })\n    await send({\n        'type': 'http.response.body',\n        'body': b'Hello from Hornbeam ASGI!',\n    })\n```\n\n```erlang\nhornbeam:start(\"app:application\", #{\n    worker_class =\u003e asgi,\n    pythonpath =\u003e [\"examples/hello_asgi\"]\n}).\n```\n\n### WebSocket Chat\n\n```python\n# 
examples/websocket_chat/app.py\nasync def app(scope, receive, send):\n    if scope['type'] == 'websocket':\n        await send({'type': 'websocket.accept'})\n\n        while True:\n            message = await receive()\n            if message['type'] == 'websocket.disconnect':\n                break\n            if message['type'] == 'websocket.receive':\n                # Echo back\n                await send({\n                    'type': 'websocket.send',\n                    'text': message.get('text', '')\n                })\n```\n\n```erlang\nhornbeam:start(\"app:app\", #{\n    worker_class =\u003e asgi,\n    pythonpath =\u003e [\"examples/websocket_chat\"]\n}).\n```\n\n### Embedding Service with ETS Caching\n\nSee `examples/embedding_service/` for a complete ML embedding service using Erlang ETS for caching.\n\n### Distributed ML Inference\n\nSee `examples/distributed_rpc/` for distributing ML inference across a cluster.\n\n## Running with Gunicorn (for comparison)\n\nAll examples are designed to work with gunicorn too (with fallback functions):\n\n```bash\n# With gunicorn (single process, no Erlang features)\ncd examples/hello_wsgi\ngunicorn app:application\n\n# With hornbeam (Erlang concurrency, shared state, distribution)\nrebar3 shell\n\u003e hornbeam:start(\"app:application\", #{pythonpath =\u003e [\"examples/hello_wsgi\"]}).\n```\n\n## Configuration\n\n### Via hornbeam:start/2\n\n```erlang\nhornbeam:start(\"myapp:application\", #{\n    %% Server\n    bind =\u003e \u003c\u003c\"0.0.0.0:8000\"\u003e\u003e,\n    ssl =\u003e false,\n    certfile =\u003e undefined,\n    keyfile =\u003e undefined,\n\n    %% Protocol\n    worker_class =\u003e wsgi,  % wsgi | asgi\n    http_version =\u003e ['HTTP/1.1', 'HTTP/2'],\n\n    %% Workers\n    workers =\u003e 4,\n    timeout =\u003e 30000,\n    keepalive =\u003e 2,\n    max_requests =\u003e 1000,\n\n    %% ASGI\n    lifespan =\u003e auto,  % auto | on | off\n\n    %% WebSocket\n    websocket_timeout =\u003e 60000,\n   
 websocket_max_frame_size =\u003e 16777216,  % 16MB\n\n    %% Python\n    pythonpath =\u003e [\u003c\u003c\".\"\u003e\u003e]\n}).\n```\n\n### Via sys.config\n\n```erlang\n[\n    {hornbeam, [\n        {bind, \"127.0.0.1:8000\"},\n        {workers, 4},\n        {worker_class, wsgi},\n        {timeout, 30000},\n        {pythonpath, [\".\"]}\n    ]}\n].\n```\n\n## API Reference\n\n### hornbeam module\n\n| Function | Description |\n|----------|-------------|\n| `start(AppSpec)` | Start server with WSGI/ASGI app |\n| `start(AppSpec, Options)` | Start server with options |\n| `stop()` | Stop the server |\n| `register_function(Name, Fun)` | Register Erlang function callable from Python |\n| `register_function(Name, Module, Function)` | Register module:function |\n| `unregister_function(Name)` | Unregister a function |\n\n### Python hornbeam_erlang module\n\n| Function | Description |\n|----------|-------------|\n| `state_get(key)` | Get value from ETS (None if not found) |\n| `state_set(key, value)` | Set value in ETS |\n| `state_incr(key, delta=1)` | Atomically increment counter, return new value |\n| `state_decr(key, delta=1)` | Atomically decrement counter |\n| `state_delete(key)` | Delete key from ETS |\n| `state_get_multi(keys)` | Batch get multiple keys |\n| `state_keys(prefix=None)` | Get all keys, optionally by prefix |\n| `rpc_call(node, module, function, args, timeout_ms)` | Call function on remote node |\n| `rpc_cast(node, module, function, args)` | Async call (fire and forget) |\n| `nodes()` | Get list of connected Erlang nodes |\n| `node()` | Get this node's name |\n| `publish(topic, message)` | Publish to pub/sub topic |\n| `call(name, *args)` | Call registered Erlang function |\n| `cast(name, *args)` | Async call to registered function |\n\n### Python hornbeam_ml module\n\n| Function | Description |\n|----------|-------------|\n| `cached_inference(fn, input, cache_key=None, cache_prefix=\"ml\")` | Run inference with ETS caching |\n| `cache_stats()` | Get 
cache hit/miss statistics |\n\n## Performance\n\nHornbeam achieves high throughput by leveraging Erlang's lightweight process model and avoiding Python's GIL limitations.\n\n### Benchmark Results\n\nTested on Apple M4 Pro, Python 3.13, OTP 28 (February 2026):\n\n| Test | Hornbeam | Gunicorn (gthread) | Speedup |\n|------|----------|--------------------|---------|\n| Simple (100 concurrent) | **33,643** req/s | 3,661 req/s | **9.2x** |\n| High concurrency (500 concurrent) | **28,890** req/s | 3,631 req/s | **8.0x** |\n| Large response (64KB) | **29,118** req/s | 3,599 req/s | **8.1x** |\n\nBoth servers configured with 4 workers, gunicorn with gthread and 4 threads per worker. Zero failed requests on both.\n\n### Latency Comparison\n\n| Test | Hornbeam | Gunicorn |\n|------|----------|----------|\n| Simple (100 concurrent) | **2.97ms** | 27.3ms |\n| High concurrency (500 concurrent) | **17.3ms** | 137.7ms |\n| Large response (64KB) | **1.72ms** | 13.9ms |\n\n### Run Your Own Benchmarks\n\n```bash\n# Quick benchmark\n./benchmarks/quick_bench.sh\n\n# Full benchmark suite\npython benchmarks/run_benchmark.py\n\n# Compare with gunicorn\npython benchmarks/compare_servers.py\n```\n\nSee the [Benchmarking Guide](https://hornbeam.dev/docs/guides/benchmarking) for details.\n\n## Development\n\n```bash\n# Compile\nrebar3 compile\n\n# Run tests\nrebar3 ct\n\n# Start shell\nrebar3 shell\n```\n\n## License\n\nApache License 2.0\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbenoitc%2Fhornbeam","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbenoitc%2Fhornbeam","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbenoitc%2Fhornbeam/lists"}