<p align="right">
  <a href="https://travis-ci.com/uber/fiber">
      <img src="https://travis-ci.com/uber/fiber.svg?token=BxMzxQEDDtTBPG9151kk&branch=master" alt="build" />
  </a>
</p>

<img src="docs/img/fiber_logo.png" alt="drawing" width="550"/>

[**Project Home**](https://uber.github.io/fiber/) &nbsp;
[**Blog**](https://eng.uber.com/fiberdistributed/) &nbsp;
[**Documents**](https://uber.github.io/fiber/getting-started/) &nbsp;
[**Paper**](https://arxiv.org/abs/2003.11164) &nbsp;
[**Media Coverage**](https://venturebeat.com/2020/03/26/uber-details-fiber-a-framework-for-distributed-ai-model-training/)

<img src="https://github.com/uber/fiber/raw/docs/email-list/docs/img/new-icon.png"/> Join the Fiber users email list: [fiber-users@googlegroups.com](https://groups.google.com/forum/#!forum/fiber-users)

# Fiber

### Distributed Computing for AI Made Simple

*This project is experimental and the APIs are not considered stable.*

Fiber is a Python distributed computing library for modern computer clusters.

* It is easy to use. Fiber allows you to write programs that run on a computer cluster without diving into the details of the cluster itself.
* It is easy to learn. Fiber provides the same API as Python's standard [multiprocessing](https://docs.python.org/3.6/library/multiprocessing.html) library. If you know how to use multiprocessing, you can program a computer cluster with Fiber.
* It is fast. Fiber's communication backbone is built on top of [Nanomsg](https://nanomsg.org/), a high-performance asynchronous messaging library that enables fast and reliable communication.
* It needs no deployment. You run it the same way as a normal application on a computer cluster, and Fiber handles the rest for you.
* It is reliable. Fiber has built-in error handling for running a pool of workers, so you can focus on writing the actual application code instead of dealing with crashed workers.

Originally, it was developed to power large-scale parallel scientific computation projects like [POET](https://eng.uber.com/poet-open-ended-deep-learning/), and it has been used to power similar projects within Uber.


## Installation

```
pip install fiber
```

Check [here](https://uber.github.io/fiber/installation/) for details.

## Quick Start


### Hello Fiber
To use Fiber, simply import it in your code; it works very similarly to multiprocessing.

```python
import fiber

if __name__ == '__main__':
    fiber.Process(target=print, args=('Hello, Fiber!',)).start()
```

Note that `if __name__ == '__main__':` is necessary because Fiber uses the *spawn* method to start new processes.
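Because the API mirrors multiprocessing, code written this way also runs locally with the standard library. A minimal sketch (not from the Fiber docs) using `multiprocessing.pool.ThreadPool`, which shares the `Pool` map/apply interface but uses threads, so no new processes are spawned and no `__main__` guard is needed; the `square` function is just an illustrative workload:

```python
from multiprocessing.pool import ThreadPool

def square(x):
    return x * x

# ThreadPool exposes the same map/apply interface as
# multiprocessing.Pool (and hence fiber.Pool).
with ThreadPool(processes=4) as pool:
    results = pool.map(square, range(5))

print(results)  # [0, 1, 4, 9, 16]
```

Swapping `ThreadPool` for `fiber.Pool` is the kind of one-line change the shared API is meant to enable.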
Check [here](https://stackoverflow.com/questions/50781216/in-python-multiprocessing-process-do-we-have-to-use-name-main) for details.

Let's take a look at a more complex example:

### Estimating Pi


```python
import fiber
import random

@fiber.meta(cpu=1)
def inside(p):
    x, y = random.random(), random.random()
    return x * x + y * y < 1

def main():
    NUM_SAMPLES = int(1e6)
    pool = fiber.Pool(processes=4)
    count = sum(pool.map(inside, range(0, NUM_SAMPLES)))
    print("Pi is roughly {}".format(4.0 * count / NUM_SAMPLES))

if __name__ == '__main__':
    main()
```


Fiber implements most of multiprocessing's API, including `Process`, `SimpleQueue`, `Pool`, `Pipe`, and `Manager`, and it has its own extensions to the multiprocessing API that make it easy to compose large-scale distributed applications. For the detailed API guide, check out [here](https://uber.github.io/fiber/process/).

### Running on a Kubernetes cluster

Fiber also has native support for computer clusters. To run the above example on Kubernetes, Fiber provides a convenient command line tool to manage the workflow.

Assume you have a working Docker environment locally and have finished configuring the [Google Cloud SDK](https://cloud.google.com/sdk/docs/quickstarts), so that both `gcloud` and `kubectl` are available locally. Then you can start by writing a Dockerfile that describes the running environment. An example Dockerfile looks like this:

```dockerfile
# example.docker
FROM python:3.6-buster
ADD examples/pi_estimation.py /root/pi_estimation.py
RUN pip install fiber
```
**Build an image and launch your job**

```
fiber run -a python3 /root/pi_estimation.py
```

This command looks for the local Dockerfile, builds a Docker image, and pushes it to your Google Container Registry. It then launches the main job, which contains your code, and runs the command `python3 /root/pi_estimation.py` inside that job.
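Before launching to a cluster, it can help to sanity-check the sampling math locally. A serial sketch of the same Monte Carlo estimate in plain Python (no Fiber or cluster required; the seed and sample count are illustrative choices, not from the original example):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def inside(_):
    # One Monte Carlo sample: is a uniform point in the unit
    # square also inside the quarter unit circle?
    x, y = random.random(), random.random()
    return x * x + y * y < 1

NUM_SAMPLES = 100_000
count = sum(inside(i) for i in range(NUM_SAMPLES))
pi_est = 4.0 * count / NUM_SAMPLES
print("Pi is roughly {}".format(pi_est))
```

The distributed version simply replaces the serial `sum(...)` with `pool.map` over the same `inside` function.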
Once the main job is running, it starts 4 subsequent jobs on the cluster, each of which is a Pool worker.


## Supported platforms

* Operating system: Linux
* Python: 3.6+
* Supported cluster management systems:
	* Kubernetes (tested with Google Kubernetes Engine on Google Cloud)

We are interested in supporting other cluster management systems like [Slurm](https://slurm.schedmd.com/); if you want to contribute, please let us know.


Check [here](https://uber.github.io/fiber/platforms/) for details.

## Documentation

The documentation, including method/API references, can be found [here](https://uber.github.io/fiber/getting-started/).


## Testing

Install the test dependencies. You'll also need to make sure [docker](https://docs.docker.com/install/) is available on the testing machine.

```bash
$ pip install -e .[test]
```

Run the tests:

```bash
$ make test
```

## Contributing
Please read our [code of conduct](CODE_OF_CONDUCT.md) before you contribute! You can find details for submitting pull requests in the [CONTRIBUTING.md](CONTRIBUTING.md) file. Issue [template](https://help.github.com/articles/about-issue-and-pull-request-templates/).

## Versioning
We document versions and changes in our changelog - see the [CHANGELOG.md](CHANGELOG.md) file for details.

## License
This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.

## Cite Fiber

```
@misc{zhi2020fiber,
    title={Fiber: A Platform for Efficient Development and Distributed Training for Reinforcement Learning and Population-Based Methods},
    author={Jiale Zhi and Rui Wang and Jeff Clune and Kenneth O. Stanley},
    year={2020},
    eprint={2003.11164},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

## Acknowledgments
* Special thanks to Piero Molino for designing the logo for Fiber