{"id":15046974,"url":"https://github.com/microsoft/moonlit","last_synced_at":"2025-04-07T05:13:31.620Z","repository":{"id":183918780,"uuid":"645614612","full_name":"microsoft/Moonlit","owner":"microsoft","description":"This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression.","archived":false,"fork":false,"pushed_at":"2024-10-25T23:47:44.000Z","size":12613,"stargazers_count":81,"open_issues_count":7,"forks_count":7,"subscribers_count":5,"default_branch":"main","last_synced_at":"2025-04-07T05:13:21.273Z","etag":null,"topics":["inference-efficiency","model-compression","neural-architecture-search","token-pruning"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/microsoft.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":"SUPPORT.md","governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-26T03:49:08.000Z","updated_at":"2025-04-04T01:58:21.000Z","dependencies_parsed_at":"2024-09-17T09:23:02.401Z","dependency_job_id":null,"html_url":"https://github.com/microsoft/Moonlit","commit_stats":null,"previous_names":["microsoft/moonlit"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FMoonlit","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FMoonlit/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FMoonlit/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FMoonlit/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/microsoft","download_url":"https://codeload.github.com/microsoft/Moonlit/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247595335,"owners_count":20963943,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["inference-efficiency","model-compression","neural-architecture-search","token-pruning"],"created_at":"2024-09-24T20:53:49.585Z","updated_at":"2025-04-07T05:13:31.599Z","avatar_url":"https://github.com/microsoft.png","language":"Python","readme":"# Moonlit: Research for enhancing AI models' efficiency and performance.\n\n**Moonlit** is a collection of our model compression work for efficient AI.\n\n\u003e [**ToP**](./ToP) (```@KDD'23```): [**Constraint-aware and Ranking-distilled Token Pruning for Efficient Transformer Inference**](https://arxiv.org/abs/2306.14393)\n\u003e\u003e**ToP** is a constraint-aware and ranking-distilled token pruning method, which selectively removes unnecessary tokens as input sequence pass through layers, allowing the model to improve online 
inference speed while preserving accuracy.\n\u003e \n\u003e [**SpaceEvo**](./SpaceEvo) (```@ICCV'23```): [**SpaceEvo: Hardware-Friendly Search Space Design for Efficient INT8 Inference**](https://arxiv.org/abs/2303.08308)\n\u003e\u003e**SpaceEvo** is an automatic method for designing a dedicated, quantization-friendly search space for target hardware. This work is featured on Microsoft Research blog: [Efficient and hardware-friendly neural architecture search with SpaceEvo](https://www.microsoft.com/en-us/research/blog/efficient-and-hardware-friendly-neural-architecture-search-with-spaceevo/)\n\u003e \n\u003e [**ElasticViT**](./ElasticViT) (```@ICCV'23```): [**ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision Transformer on Diverse Mobile Devices**](https://arxiv.org/abs/2303.09730)\n\u003e\u003e**ElasticViT** is a two-stage NAS approach that trains a high-quality ViT supernet over a very large search space for covering a wide range of mobile devices, and then searches an optimal sub-network (subnet) for direct deployment. \n\u003e\n\u003e [**LitePred**](./LitePred/) (```@NSDI'24```): [**LitePred: Transferable and Scalable Latency Prediction for Hardware-Aware Neural Architecture Search**]()\n\u003e\u003e**LitePred** is a lightweight transferrable approach for accurately predicting DNN inference latency. Instead of training a latency predictor from scratch, LitePred is the first to transfer pre-existing latency predictors and achieve accurate prediction on new edge platforms with a profiling cost of less than 1 hour. \n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmicrosoft%2Fmoonlit","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmicrosoft%2Fmoonlit","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmicrosoft%2Fmoonlit/lists"}
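To make the token-pruning idea behind ToP concrete, here is a minimal, illustrative PyTorch sketch of dropping low-importance tokens between transformer layers. The scoring is a stand-in: ToP learns a ranking-distilled scorer and enforces a global constraint, which this toy does not reproduce, and `prune_tokens` / `keep_ratio` are hypothetical names for illustration only.

```python
import torch

def prune_tokens(hidden_states, scores, keep_ratio=0.7):
    """Keep only the top-scoring tokens between transformer layers.

    hidden_states: (batch, seq_len, dim) activations entering the next layer.
    scores:        (batch, seq_len) per-token importance, e.g. derived from
                   attention weights. ToP itself uses a learned,
                   ranking-distilled scorer, which this toy does not model.
    """
    batch, seq_len, dim = hidden_states.shape
    k = max(1, int(seq_len * keep_ratio))
    # Indices of the k most important tokens, re-sorted to preserve
    # the original token order.
    keep = scores.topk(k, dim=1).indices.sort(dim=1).values
    idx = keep.unsqueeze(-1).expand(batch, k, dim)
    return hidden_states.gather(1, idx)

# Toy usage: prune ~30% of tokens using random importance scores.
h = torch.randn(2, 128, 768)
s = torch.rand(2, 128)
print(prune_tokens(h, s).shape)  # torch.Size([2, 89, 768])
```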
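ElasticViT's second stage searches the trained supernet for a subnet that fits a latency budget. The sketch below shows only the general pattern (random search under a latency constraint); the config fields and the `estimate_latency` / `evaluate_accuracy` placeholders are hypothetical, not ElasticViT's actual search space or API.

```python
import random

# Hypothetical elastic dimensions; ElasticViT's real space also varies
# kernel sizes, MLP ratios, attention settings, etc.
DEPTHS = [8, 10, 12]
WIDTHS = [192, 256, 320]
HEADS = [3, 4, 5]

def sample_subnet():
    return {"depth": random.choice(DEPTHS),
            "width": random.choice(WIDTHS),
            "heads": random.choice(HEADS)}

def estimate_latency(cfg):
    # Placeholder cost model; in practice this comes from a latency
    # predictor (see LitePred) or on-device profiling.
    return 0.02 * cfg["depth"] * cfg["width"] / 192

def evaluate_accuracy(cfg):
    # Placeholder proxy; in practice the subnet inherits supernet
    # weights and is evaluated on a validation set.
    return cfg["depth"] * cfg["width"] * cfg["heads"]

def search(budget, trials=1000):
    best, best_acc = None, float("-inf")
    for _ in range(trials):
        cfg = sample_subnet()
        if estimate_latency(cfg) > budget:
            continue  # reject subnets that violate the latency budget
        acc = evaluate_accuracy(cfg)
        if acc > best_acc:
            best, best_acc = cfg, acc
    return best

print(search(budget=0.3))
```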
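LitePred's core idea, transferring an existing latency predictor to a new device instead of training one from scratch, can be illustrated as light fine-tuning on a small set of configurations profiled on the target device. This is a generic sketch under assumed names (`LatencyPredictor`, a flat 16-dimensional op-config feature vector); LitePred's actual features, predictor pool, and adaptation strategy are more sophisticated.

```python
import torch
import torch.nn as nn

# Hypothetical feature encoding: each op is described by a fixed-size
# configuration vector (kernel size, channels, stride, ...).
class LatencyPredictor(nn.Module):
    def __init__(self, in_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def transfer(predictor, profiled_x, profiled_y, steps=200, lr=1e-3):
    """Fine-tune a predictor trained on a source device using a small
    set of (config, latency) pairs profiled on the new device."""
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(predictor(profiled_x), profiled_y)
        loss.backward()
        opt.step()
    return predictor

# Toy usage: pretend `pred` was pre-trained on a source device, then adapt
# it with 100 configs profiled on the new device (well under an hour).
pred = LatencyPredictor()
x = torch.randn(100, 16)
y = torch.rand(100) * 10.0
transfer(pred, x, y)
print(pred(x[:3]))
```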