{"id":13604214,"url":"https://github.com/MachineLearningSystem/DAPPLE","last_synced_at":"2025-04-11T23:32:00.943Z","repository":{"id":185461719,"uuid":"558361178","full_name":"MachineLearningSystem/DAPPLE","owner":"MachineLearningSystem","description":"An Efficient Pipelined Data Parallel Approach for Training Large Model","archived":false,"fork":true,"pushed_at":"2020-12-11T03:13:47.000Z","size":1724,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2024-11-07T08:42:27.364Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"AlibabaPAI/DAPPLE","license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MachineLearningSystem.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2022-10-27T11:56:03.000Z","updated_at":"2022-09-14T05:25:08.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/MachineLearningSystem/DAPPLE","commit_stats":null,"previous_names":["machinelearningsystem/dapple"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FDAPPLE","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FDAPPLE/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FDAPPLE/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FDAPPLE/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/MachineLearningSystem","download_url":"htt
ps://codeload.github.com/MachineLearningSystem/DAPPLE/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248495053,"owners_count":21113557,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T19:00:41.770Z","updated_at":"2025-04-11T23:31:58.779Z","avatar_url":"https://github.com/MachineLearningSystem.png","language":null,"readme":"# DAPPLE: An Efficient Pipelined Data Parallel Approach for Large Model Training\n\n[![](https://img.shields.io/badge/PyPI-HPGO%200.92-blue?logo=python\u0026style=for-the-badge\u0026logoColor=yellow)](https://pypi.org/project/HPGO/)\n\nDAPPLE is a distributed training framework that combines pipeline parallelism\nand data parallelism to address the scheduling and planning challenges of synchronous training.\nThis framework features a profiler, a [planner](https://github.com/AlibabaPAI/DAPPLE/tree/master/src),\nand a runtime system.\nThe profiler takes a user’s DNN model as input and profiles the execution time, activation size, and parameter size of each layer.\nSample profiling results for some models are given in [profiling results](https://github.com/AlibabaPAI/DAPPLE/tree/master/profiling_results).\nTaking profiling results as input, the DAPPLE planner generates an optimized hybrid parallelization plan for a given global batch size,\nwhich is further split into multiple micro-batches and scheduled for execution by the DAPPLE runtime.\n\nThis repository contains the source code implementation of DAPPLE's planning results on\n5 typical 
models:\n[VGG19](https://github.com/AlibabaPAI/DAPPLE/tree/master/vgg19),\n[AmoebaNet](https://github.com/AlibabaPAI/DAPPLE/tree/master/amoeba_net),\n[BERT](https://github.com/AlibabaPAI/DAPPLE/tree/master/bert),\n[GNMT](https://github.com/AlibabaPAI/DAPPLE/tree/master/gnmt),\nand [XLNET](https://github.com/AlibabaPAI/DAPPLE/tree/master/xlnet).\n\n## Running the DAPPLE experiments\n### DAPPLE Planner\nAll the planner-related experiments can be reproduced on any machine, regardless of the environment. We've provided a detailed how-to in [`PLANNER_REPRODUCTION.md`](PLANNER_REPRODUCTION.md).\n\n### DAPPLE Runtime\nPlease see the launch script `run.sh` for each model for details.\n\n## Using the Planner\n### Install from PyPI as a Python 3 package\nPyPI: [https://pypi.org/project/HPGO/](https://pypi.org/project/HPGO/)\n\n```bash\npip3 install HPGO\n```\n\n### Build from source\n```bash\nrustup default nightly\ncargo build --release\nmaturin build --release\npip3 install xxx.whl\n```\n\n### Example Usage of the Python API\n```python\n# Import the HPGO Python API\nimport HPGO\n# Construct the Conductor object\n# conductor_from_torch_graph_and_seps(profile_filename, profile_batch_size, global_batch_size, devices)\nconductor = HPGO.conductor_from_torch_graph_and_seps(\"./profiling_results/xlnet-36-pbs-1.txt\", 1, 128, [8, 16])\nresult = conductor.py_orchestrate()\nprint(result)\n```\n\n## License\nThe DAPPLE Planner is open-sourced under the terms of the BSD-3-Clause license; details can be found in the [`src/LICENSE.md`](src/LICENSE.md) file.\n\nThe file [`src/input/torch_graph_py.rs`](src/input/torch_graph_py.rs) contains Python source code from [PipeDream](https://github.com/msr-fiddle/pipedream), which is licensed under the MIT License.\n","funding_links":[],"categories":["Paper-Code"],"sub_categories":["Parallelism 
Training"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FMachineLearningSystem%2FDAPPLE","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FMachineLearningSystem%2FDAPPLE","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FMachineLearningSystem%2FDAPPLE/lists"}