{"id":13571373,"url":"https://github.com/microsoft/archai","last_synced_at":"2025-05-16T08:05:19.145Z","repository":{"id":37051315,"uuid":"245036506","full_name":"microsoft/archai","owner":"microsoft","description":"Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.","archived":false,"fork":false,"pushed_at":"2024-10-23T17:40:42.000Z","size":50692,"stargazers_count":476,"open_issues_count":4,"forks_count":89,"subscribers_count":26,"default_branch":"main","last_synced_at":"2025-05-13T01:18:04.036Z","etag":null,"topics":["automated-machine-learning","automl","darts","deep-learning","hyperparameter-optimization","machine-learning","model-compression","nas","neural-architecture-search","petridish","python","pytorch"],"latest_commit_sha":null,"homepage":"https://microsoft.github.io/archai","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/microsoft.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":"SECURITY.md","support":"docs/support/contact.rst","governance":null,"roadmap":null,"authors":"AUTHORS.md","dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-03-05T00:54:29.000Z","updated_at":"2025-05-03T20:53:54.000Z","dependencies_parsed_at":"2022-07-12T18:22:55.121Z","dependency_job_id":"339c2ede-3f35-4df3-a62a-78acf5bcb78e","html_url":"https://github.com/microsoft/archai","commit_stats":{"total_commits":2542,"total_committers":25,"mean_commits":101.68,"dds":0.6664044059795436,"last_synced_commit":"6bb8ba3864263ec0e1f6795e5ce21cc4c1aac046"},"previous_names":[],"tags_count":16,"template":false,"template_full_name":null,"repository_url":"https://repos.ecos
yste.ms/api/v1/hosts/GitHub/repositories/microsoft%2Farchai","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2Farchai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2Farchai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2Farchai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/microsoft","download_url":"https://codeload.github.com/microsoft/archai/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254493378,"owners_count":22080126,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automated-machine-learning","automl","darts","deep-learning","hyperparameter-optimization","machine-learning","model-compression","nas","neural-architecture-search","petridish","python","pytorch"],"created_at":"2024-08-01T14:01:01.462Z","updated_at":"2025-05-16T08:05:14.128Z","avatar_url":"https://github.com/microsoft.png","language":"Python","readme":"\u003ch1 align=\"center\"\u003e\n   \u003cimg src=\"https://user-images.githubusercontent.com/9354770/171523113-70c7214b-8298-4d7e-abd9-81f5788f6e19.png\" alt=\"Archai logo\" width=\"384px\" /\u003e\n   \u003cbr /\u003e\n\u003c/h1\u003e\n\n\u003cdiv align=\"center\"\u003e\n   \u003cb\u003eArchai\u003c/b\u003e accelerates your Neural Architecture Search (NAS) through \u003cb\u003efast\u003c/b\u003e, \u003cb\u003ereproducible\u003c/b\u003e and \u003cb\u003emodular\u003c/b\u003e research, enabling the generation of efficient deep networks for 
various applications.\n\u003c/div\u003e\n\n\u003cbr /\u003e\n\n\u003cdiv align=\"center\"\u003e\n\t\u003cimg src=\"https://img.shields.io/github/release/microsoft/archai?style=flat-square\" alt=\"Release version\" /\u003e\n\t\u003cimg src=\"https://img.shields.io/github/issues-raw/microsoft/archai?style=flat-square\" alt=\"Open issues\" /\u003e\n\t\u003cimg src=\"https://img.shields.io/github/contributors/microsoft/archai?style=flat-square\" alt=\"Contributors\" /\u003e\n\t\u003cimg src=\"https://img.shields.io/pypi/dm/archai?style=flat-square\" alt=\"PyPI downloads\" /\u003e\n\t\u003cimg src=\"https://img.shields.io/github/license/microsoft/archai?color=red\u0026style=flat-square\" alt=\"License\" /\u003e\n\u003c/div\u003e\n\n\u003cbr /\u003e\n\n\u003cdiv align=\"center\"\u003e\n   \u003ca href=\"#installation\"\u003eInstallation\u003c/a\u003e •\n   \u003ca href=\"#quickstart\"\u003eQuickstart\u003c/a\u003e •\n   \u003ca href=\"#tasks\"\u003eTasks\u003c/a\u003e •\n   \u003ca href=\"#documentation\"\u003eDocumentation\u003c/a\u003e •\n   \u003ca href=\"#support\"\u003eSupport\u003c/a\u003e\n\u003c/div\u003e\n\n## Installation\n\nArchai can be installed in several ways; we recommend using a virtual environment such as `conda` or `pyenv`.\n\nTo install Archai from PyPI, run:\n\n```bash\npip install archai\n```\n\n**Archai requires Python 3.8+ and PyTorch 1.7.0+.**\n\nFor further information, please consult the [installation guide](https://microsoft.github.io/archai/getting_started/installation.html).\n\n## Quickstart\n\nIn this quickstart example, we apply Archai to Natural Language Processing to find Pareto-optimal Transformer configurations according to a set of objectives.\n\n### Creating the Search Space\n\nWe start by importing the `TransformerFlexSearchSpace` class, which represents the search space for the Transformer 
architecture:\n\n```python\nfrom archai.discrete_search.search_spaces.nlp.transformer_flex.search_space import TransformerFlexSearchSpace\n\nspace = TransformerFlexSearchSpace(\"gpt2\")\n```\n\n### Defining Search Objectives\n\nNext, we define the objectives we want to optimize. In this example, we use `NonEmbeddingParamsProxy`, `TransformerFlexOnnxLatency`, and `TransformerFlexOnnxMemory` to define the objectives:\n\n```python\nfrom archai.discrete_search.api.search_objectives import SearchObjectives\nfrom archai.discrete_search.evaluators.nlp.parameters import NonEmbeddingParamsProxy\nfrom archai.discrete_search.evaluators.nlp.transformer_flex_latency import TransformerFlexOnnxLatency\nfrom archai.discrete_search.evaluators.nlp.transformer_flex_memory import TransformerFlexOnnxMemory\n\nsearch_objectives = SearchObjectives()\nsearch_objectives.add_objective(\n   \"non_embedding_params\",\n   NonEmbeddingParamsProxy(),\n   higher_is_better=True,\n   compute_intensive=False,\n   constraint=(1e6, 1e9),\n)\nsearch_objectives.add_objective(\n   \"onnx_latency\",\n   TransformerFlexOnnxLatency(space),\n   higher_is_better=False,\n   compute_intensive=False,\n)\nsearch_objectives.add_objective(\n   \"onnx_memory\",\n   TransformerFlexOnnxMemory(space),\n   higher_is_better=False,\n   compute_intensive=False,\n)\n```\n\n### Initializing the Algorithm\n\nWe use the `EvolutionParetoSearch` algorithm to conduct the search:\n\n```python\nfrom archai.discrete_search.algos.evolution_pareto import EvolutionParetoSearch\n\nalgo = EvolutionParetoSearch(\n   space,\n   search_objectives,\n   None,\n   \"tmp\",\n   num_iters=5,\n   init_num_models=10,\n   seed=1234,\n)\n```\n\n### Performing the Search\n\nFinally, we call the `search()` method to start the NAS process:\n\n```python\nalgo.search()\n```\n\nThe algorithm will iterate through different network architectures, evaluate their performance based on the defined objectives, and ultimately produce a frontier of Pareto-optimal 
results.\n\n## Tasks\n\nTo showcase Archai's capabilities, a set of end-to-end tasks is provided:\n\n* [Text Generation](https://github.com/microsoft/archai/blob/main/tasks/text_generation)\n* [Face Segmentation](https://github.com/microsoft/archai/blob/main/tasks/face_segmentation)\n\n## Documentation\n\nThe [official documentation](https://microsoft.github.io/archai) also provides a series of [notebooks](https://microsoft.github.io/archai/getting_started/notebooks.html).\n\n## Support\n\nIf you have any questions or feedback about the Archai project or the open problems in Neural Architecture Search, please feel free to contact us:\n\n* Email: archai@microsoft.com\n* Issues: https://github.com/microsoft/archai/issues\n\nWe welcome any questions, feedback, or suggestions, and look forward to hearing from you.\n\n### Team\n\nArchai has been created and maintained by [Shital Shah](https://shital.com), [Debadeepta Dey](https://debadeepta.com), [Gustavo de Rosa](https://www.microsoft.com/en-us/research/people/gderosa), Caio Mendes, [Piero Kauffmann](https://www.microsoft.com/en-us/research/people/pkauffmann), [Chris Lovett](https://lovettsoftware.com), Allie Del Giorno, Mojan Javaheripi, and [Ofer Dekel](https://www.microsoft.com/en-us/research/people/oferd) at Microsoft Research.\n\n### Contributions\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.\n\nWhen you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. 
You will only need to do this once across all repositories using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n\n### Trademark\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark \u0026 Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.\n\n### License\n\nThis project is released under the MIT License. Please review the [LICENSE file](https://github.com/microsoft/archai/blob/main/LICENSE) for more details.\n","funding_links":[],"categories":["Python","AutoML","Profiling","Scheduling","Tooling","Tools and projects"],"sub_categories":["Profiling","LLM"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmicrosoft%2Farchai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmicrosoft%2Farchai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmicrosoft%2Farchai/lists"}