{"id":13605175,"url":"https://github.com/MachineLearningSystem/Fluid","last_synced_at":"2025-04-12T02:32:52.999Z","repository":{"id":185461772,"uuid":"593204191","full_name":"MachineLearningSystem/Fluid","owner":"MachineLearningSystem","description":"A Generic Resource-Aware Hyperparameter Tuning Execution Engine","archived":false,"fork":true,"pushed_at":"2022-01-08T22:23:01.000Z","size":299,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2024-11-07T09:44:27.387Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"SymbioticLab/Fluid","license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MachineLearningSystem.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2023-01-25T13:36:03.000Z","updated_at":"2022-08-08T03:14:15.000Z","dependencies_parsed_at":null,"dependency_job_id":"42cd32de-1227-4dbd-8cb5-1f7c9f112d1e","html_url":"https://github.com/MachineLearningSystem/Fluid","commit_stats":null,"previous_names":["machinelearningsystem/fluid"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FFluid","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FFluid/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FFluid/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FFluid/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/MachineLearn
ingSystem","download_url":"https://codeload.github.com/MachineLearningSystem/Fluid/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248506933,"owners_count":21115510,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T19:00:55.452Z","updated_at":"2025-04-12T02:32:52.535Z","avatar_url":"https://github.com/MachineLearningSystem.png","language":null,"readme":"# Fluid: Resource-Aware Hyperparameter Tuning Engine\n\n[![PyPI version](https://img.shields.io/pypi/v/fluidexec.svg)](https://pypi.org/project/fluidexec)\n[![Python package](https://github.com/SymbioticLab/Fluid/actions/workflows/python-package.yml/badge.svg?event=release)](https://github.com/SymbioticLab/Fluid/actions/workflows/python-package.yml)\n\n`Fluid` is an alternative [Ray](https://ray.io) executor that intelligently manages trial executions on behalf of hyperparameter tuning algorithms, in order to increase resource utilization and improve end-to-end makespan.\n\nThis is the implementation of our MLSys'21 [paper](https://symbioticlab.org/publications/#/venue:MLSys) \"Fluid: Resource-Aware Hyperparameter Tuning Engine\".\n\n## Get Started\nFirst, follow the [instructions](https://docs.ray.io/en/master/tune/index.html) in Ray Tune to set up the Ray cluster and a tuning environment as usual.\n\nThen make sure [Nvidia MPS](https://docs.nvidia.com/deploy/mps/index.html#topic_6_1) is correctly set up on all worker nodes.\n\n`Fluid` itself is a normal Python package that can be installed with `pip install fluidexec`. 
Note that the PyPI package name is `fluidexec` because the name `fluid` is already taken.\n\nTo use `Fluid` in Ray Tune, pass an instance of it as the trial executor to `tune.run`. It should work with any scheduler:\n\n```python\nfrom ray import tune\n\nfrom fluid.fluid_executor import FluidExecutor\n\ntune.run(\n    MyTrainable,\n    trial_executor=FluidExecutor(),\n    ...\n)\n```\n\n\n## Reproduce Experiments\nSee the README in [`workloads`](workloads/) for more information.\n\n\n## Notes\n\nPlease consider citing our paper if you find it useful in your research.\n\n```bibtex\n@inproceedings{fluid:mlsys21,\n    author    = {Peifeng Yu and Jiachen Liu and Mosharaf Chowdhury},\n    booktitle = {MLSys},\n    title     = {Fluid: Resource-Aware Hyperparameter Tuning Engine},\n    year      = {2021},\n}\n```\n","funding_links":[],"categories":["Paper-Code"],"sub_categories":["Optimization"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FMachineLearningSystem%2FFluid","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FMachineLearningSystem%2FFluid","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FMachineLearningSystem%2FFluid/lists"}