{"id":13604812,"url":"https://github.com/MachineLearningSystem/Tiresias","last_synced_at":"2025-04-12T02:31:41.313Z","repository":{"id":185461997,"uuid":"495624050","full_name":"MachineLearningSystem/Tiresias","owner":"MachineLearningSystem","description":"A GPU Cluster Manager for Distributed Deep Learning Training","archived":false,"fork":true,"pushed_at":"2020-05-07T01:45:03.000Z","size":76,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2024-08-02T19:36:14.639Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"SymbioticLab/Tiresias","license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MachineLearningSystem.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2022-05-24T01:08:41.000Z","updated_at":"2022-05-05T17:36:26.000Z","dependencies_parsed_at":"2023-08-02T03:16:42.824Z","dependency_job_id":null,"html_url":"https://github.com/MachineLearningSystem/Tiresias","commit_stats":null,"previous_names":["machinelearningsystem/tiresias"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FTiresias","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FTiresias/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FTiresias/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MachineLearningSystem%2FTiresias/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners
/MachineLearningSystem","download_url":"https://codeload.github.com/MachineLearningSystem/Tiresias/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223489658,"owners_count":17153796,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T19:00:51.501Z","updated_at":"2024-11-07T09:31:00.716Z","avatar_url":"https://github.com/MachineLearningSystem.png","language":null,"readme":"Tiresias -- A GPU Cluster Manager for Distributed Deep Learning Training without complete job information\n====\n\nTiresias is a GPU cluster resource manager that aims at minimizing distributed deep learning (DDL) jobs’ completion times with partial or no a priori knowledge. It does not rely on any intermediate DL algorithm states (e.g., training loss values) or framework specifics (e.g., tensors-to-parameter server mapping).\n\nDDL training jobs bring some unique challenges to the cluster manager:\n1. unpredictable training time \n2. over-aggressive job consolidation \n3. all-or-nothing resource allocation\n4. inflexibility in GPU sharing (job preemption and resumption)\n\nTiresias tackles those challenges with the **Discretized-2DAS** (two-dimensional age/attained-service based) scheduler and the model profile-based job placement scheme.\nThe *2DAS* scheduler, which considers both the spatial (GPU requirements) and temporal (job's executed time) aspects of DDL jobs, has two scheduling algorithms (*Discretized 2D-LAS* and *Discretized 2D-Gittins Index*). 
They minimize the average JCT with no and with partial job knowledge, respectively.
The profile-based job placement scheme appropriately relaxes the consolidation constraints and maintains the cluster's resource (GPU) utilization without hurting jobs' performance.

Our testbed experiments and large-scale trace-driven simulations show that Tiresias improves the average JCT by up to 5.5x over current production solutions (2x over the state-of-the-art DDL cluster scheduler), and that it performs comparably to a solution with perfect knowledge of all job characteristics.

Detailed design and performance results are available in our [NSDI'19 paper](https://www.usenix.org/conference/nsdi19/presentation/gu).


What's in this repository?
-----------

1. Discrete-time simulator of a GPU cluster manager for DL training jobs (with both the job scheduler and the placement scheme)

**Coming soon ...**

2. Network (RDMA)-level message profiler for DL models

3. ...

Others
-----------
1. What's the **LAS** (Least-Attained Service) algorithm?
    Nuyens, Misja, and Adam Wierman. "The foreground-background queue: a survey." Performance Evaluation 65.3-4 (2008): 286-307.

2. What's the **Gittins Index** policy?
    Gittins, John, Kevin Glazebrook, and Richard Weber. Multi-armed Bandit Allocation Indices. John Wiley & Sons, 2011.
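Discretized 2D-LAS in a nutshell
-----------
The core idea of the Discretized 2D-LAS scheduler described above can be sketched in a few lines of Python. This is an illustrative sketch based on the description only, not the repository's simulator code; the class names, queue thresholds, and job values below are assumptions made up for the example.

```python
# Sketch of Discretized 2D-LAS (illustrative, not the official simulator):
# a job's two-dimensional "attained service" is num_gpus * executed_time,
# and jobs are binned into a small number of discrete priority queues by
# threshold rather than sorted continuously. Thresholds are made-up values.

from dataclasses import dataclass


@dataclass
class Job:
    name: str
    num_gpus: int            # spatial dimension: GPUs held
    executed_time: float     # temporal dimension: seconds executed so far
    arrival: float           # arrival timestamp, used for FIFO within a queue

    @property
    def attained_service(self) -> float:
        # 2D attained service = GPUs held * time executed
        return self.num_gpus * self.executed_time


def discretized_2d_las(jobs, thresholds=(3_600.0, 36_000.0)):
    """Bin jobs into discrete priority queues (queue 0 = highest priority).

    A job drops to a lower-priority queue once its 2D attained service
    crosses that queue's threshold; within a queue, jobs are served FIFO
    by arrival time.
    """
    queues = [[] for _ in range(len(thresholds) + 1)]
    for job in jobs:
        level = sum(job.attained_service > t for t in thresholds)
        queues[level].append(job)
    for q in queues:
        q.sort(key=lambda j: j.arrival)  # FIFO within each discrete queue
    return queues


jobs = [
    Job("a", num_gpus=4, executed_time=500, arrival=0),     # 2,000 GPU-s
    Job("b", num_gpus=8, executed_time=1_000, arrival=1),   # 8,000 GPU-s
    Job("c", num_gpus=2, executed_time=30_000, arrival=2),  # 60,000 GPU-s
]
queues = discretized_2d_las(jobs)
```

With the example thresholds, job `a` lands in the highest-priority queue and job `c` in the lowest: a long-running, GPU-hungry job ages out of the top queue even though nothing about its total duration is known in advance.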