{"id":46593101,"url":"https://github.com/499602d2/flowboost","last_synced_at":"2026-04-16T10:00:35.264Z","repository":{"id":242414289,"uuid":"770476288","full_name":"499602D2/flowboost","owner":"499602D2","description":"🏄‍♂️ Multi-objective optimization library for OpenFOAM, powered by Ax. Modular, optimizer-agnostic, cluster-native.","archived":false,"fork":false,"pushed_at":"2026-04-14T09:05:49.000Z","size":596,"stargazers_count":7,"open_issues_count":6,"forks_count":1,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-04-14T10:29:05.239Z","etag":null,"topics":["ax","bayesian-optimization","cluster-computing","hpc","multi-objective-optimization","openfoam","optimization-library"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/499602D2.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-03-11T16:08:34.000Z","updated_at":"2026-04-14T08:57:14.000Z","dependencies_parsed_at":"2024-06-02T22:14:29.671Z","dependency_job_id":"834aa748-6d05-4df7-81f2-7e441a01da4f","html_url":"https://github.com/499602D2/flowboost","commit_stats":null,"previous_names":["499602d2/flowboost"],"tags_count":10,"template":false,"template_full_name":null,"purl":"pkg:github/499602D2/flowboost","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/499602D2%2Fflowboost","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/499602D2
%2Fflowboost/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/499602D2%2Fflowboost/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/499602D2%2Fflowboost/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/499602D2","download_url":"https://codeload.github.com/499602D2/flowboost/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/499602D2%2Fflowboost/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31880882,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-16T09:23:21.276Z","status":"ssl_error","status_checked_at":"2026-04-16T09:23:15.028Z","response_time":69,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ax","bayesian-optimization","cluster-computing","hpc","multi-objective-optimization","openfoam","optimization-library"],"created_at":"2026-03-07T14:03:21.741Z","updated_at":"2026-04-16T10:00:35.258Z","avatar_url":"https://github.com/499602D2.png","language":"Python","readme":"# 🏄‍♂️ FlowBoost — Multi-objective Bayesian optimization for OpenFOAM\n\n![Python](https://img.shields.io/badge/python-3.10_%7C_3.11_%7C_3.12_%7C_3.13_%7C_3.14-blue)\n\nFlowBoost is a highly configurable and extensible library for handling and optimizing OpenFOAM CFD 
simulations. It provides ready-made bindings for state-of-the-art Bayesian optimization using Meta's Ax (powered by PyTorch), and simple interfaces for plugging in any other optimization library.\n\n## Features\n- Easy API syntax (see `examples/`)\n- Ready-made bindings for [Meta's Ax (Adaptive Experimentation Platform)](https://ax.dev/)\n  - Multi-objective, high-dimensional Bayesian optimization\n  - SAASBO, GPU acceleration\n- Fully hands-off cluster-native job management\n- Simple interfaces for OpenFOAM cases (`flowboost.Case`)\n- Use any optimization backend by implementing a few interfaces\n\n## Examples\nThe `examples/` directory contains code examples for simplified real-world scenarios:\n\n1. `aerofoilNACA0012Steady`: parameter optimization for a NACA 0012 aerofoil steady-state simulation\n2. `pitzDaily`: backward-facing step optimization using local Docker execution and the Pandas data backend\n\nBy default, FlowBoost uses Ax's [Service API](https://ax.dev/) as its optimization backend. In practice, any optimizer can be used, as long as it conforms to the abstract `flowboost.optimizer.Backend` base class, which the backend interfaces in `flowboost.optimizer.interfaces` implement.\n\n## OpenFOAM case abstraction\nOpenFOAM cases are handled through the `flowboost.Case` abstraction, which provides a high-level API for accessing case data and configuration. 
The `Case` abstraction can be used as-is outside of optimization workflows:\n\n```python\nfrom flowboost import Case\n\n# Clone tutorial to current working directory (or a specified dir)\ntutorial_case = Case.from_tutorial(\"fluid/aerofoilNACA0012Steady\")\n\n# Dictionary read/write access\ncontrol_dict = tutorial_case.dictionary(\"system/controlDict\")\ncontrol_dict.entry(\"writeInterval\").set(\"5000\")\n\n# Access data in an evaluated case\ncase = Case(\"my/case/path\")\ndf = case.data.simple_function_object_reader(\"forceCoeffsCompressible\")\n```\n\n## Installation\n\nFlowBoost requires Python 3.10 or later.\n\nIt is **highly** recommended that you use a virtual environment (\"venv\") when using FlowBoost. For this, [uv](https://github.com/astral-sh/uv) is the recommended choice, but virtualenv, Poetry, and others will work just fine, too.\n\nTo set up a project and virtual environment using uv:\n\n```shell\nmkdir my-research-dir\ncd my-research-dir\n\nuv init --python=3.13 # or your desired Python version (\u003e=3.10)\nuv add flowboost # add FlowBoost to the uv-managed venv as a dependency\n```\n\nNext, either source the environment manually using `source .venv/bin/activate`, or run your script using `uv run my_experiment.py`.\n\n### uv (recommended)\n\nTo add FlowBoost to an existing Python environment:\n\n```shell\nuv add flowboost\n```\n\n### pip\n\n```shell\npip install flowboost\n```\n\n### CPU compatibility\nIn order to use the standard `polars` package, your CPU should support AVX2 instructions ([and other SIMD instructions](https://github.com/pola-rs/polars/blob/78dc62851a13b87dc751a627e1e96ba1bf1549ee/py-polars/polars/_cpu_check.py)). These are typically available in Intel Haswell (4th-generation Core, 4000-series) CPUs and later, and all AMD Zen-based CPUs.\n\nIf your CPU is from 2012 or earlier, you will most likely receive an illegal instruction error. 
This can be solved by installing the `lts-cpu` extra (quote the specifier so your shell does not try to expand the brackets):\n\n```shell\nuv add \"flowboost[lts-cpu]\"\n# or: pip install \"flowboost[lts-cpu]\"\n```\n\nThis installs `polars-lts-cpu`, which is functionally identical but not as performant.\n\n## OpenFOAM\n\nFlowBoost uses OpenFOAM in two ways:\n\n1. **Case setup** uses CLI tools like `foamDictionary` and `foamCloneCase` on the host machine.\n2. **Simulations** run wherever the `Manager` sends them: locally (`Local`, `DockerLocal`) or on a cluster (`SGE`, `Slurm`).\n\nThe host always needs access to OpenFOAM CLI tools for case setup, even when simulations run elsewhere. On Linux, a native install works. On macOS and Windows, FlowBoost provides these tools transparently through Docker.\n\n- **Linux**: native OpenFOAM or Docker\n- **macOS**: Docker ([OrbStack](https://orbstack.dev/) recommended, Docker Desktop also works)\n- **Windows**: Docker (Docker Desktop); not yet tested.\n\nOn first run in Docker mode, FlowBoost builds the `flowboost/openfoam:13` image from the bundled Dockerfile. This is a one-time operation. To force a specific mode, set `FLOWBOOST_FOAM_MODE` to `native` or `docker`. To use a custom image, set `FLOWBOOST_FOAM_IMAGE`.\n\nFlowBoost uses the [openfoam.org](https://openfoam.org/) lineage (not the ESI/openfoam.com fork) and has been tested with versions 11 and 13. The bundled Dockerfile targets OpenFOAM 13 on Ubuntu 24.04. Each OpenFOAM release is tied to a specific Ubuntu LTS, so Dockerfiles are per-version by design.\n\n### Using Docker mode\n\nIn Docker mode, CLI tools like `foamDictionary` run inside a persistent container. This is automatic: `Case`, `Dictionary`, and other abstractions work the same way regardless of the mode (native, Docker).\n\nWhen running multiple OpenFOAM commands (e.g. 
reading dictionaries across many cases), use the `container()` context manager to keep a single container alive for the entire block:\n\n```python\nfrom pathlib import Path\n\nfrom flowboost import Case, foam_runtime\n\nworkdir = Path(\"flowboost_data\")\n\nwith foam_runtime().container(workdir):\n    for case_dir in sorted(workdir.glob(\"case_*\")):\n        case = Case(case_dir)\n        k = case.dictionary(\"0/k\").entry(\"boundaryField/inlet/value\").value\n        # All foamDictionary calls reuse the same container\n```\n\nWithout `container()`, FlowBoost auto-mounts paths as needed, which may restart the container when new paths are encountered. Pre-mounting a parent directory (like the workdir above) avoids this.\n\nTo run simulations locally in Docker, use the `DockerLocal` manager:\n\n```python\nfrom pathlib import Path\n\nfrom flowboost import Manager\n\ndata_dir = Path(\"flowboost_data\")  # directory for experiment data\nmanager = Manager.create(scheduler=\"dockerlocal\", wdir=data_dir, job_limit=2)\n```\n\nEach submitted case gets its own detached container with the case directory bind-mounted. See the `pitzDaily` example for a complete Docker-based workflow.\n\n## GPU acceleration\n\nIf your environment has a CUDA-compatible NVIDIA GPU, verify that a recent CUDA Toolkit release is installed; otherwise, GPU acceleration for PyTorch will not be available. This is especially critical if you are using SAASBO for high-dimensional optimization tasks (≥20 dimensions).\n\n```shell\nnvcc -V\n\n# Verify CUDA availability\npython3 -c \"import torch; print(torch.cuda.is_available())\"\n```\n\n## Development\n\nSee [CONTRIBUTING.md](CONTRIBUTING.md) for setup, tooling, and testing guidance.\n\n## Acknowledgments\nThe base functionality for FlowBoost was created as part of a mechanical engineering master's thesis at Aalto University, funded by Wärtsilä. 
Wärtsilä designs and manufactures marine combustion engines and energy solutions in Vaasa, Finland.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2F499602d2%2Fflowboost","html_url":"https://awesome.ecosyste.ms/projects/github.com%2F499602d2%2Fflowboost","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2F499602d2%2Fflowboost/lists"}