{"id":13521712,"url":"https://github.com/microsoft/nnfusion","last_synced_at":"2025-04-06T13:08:46.782Z","repository":{"id":39758840,"uuid":"252069995","full_name":"microsoft/nnfusion","owner":"microsoft","description":"A flexible and efficient deep neural network (DNN) compiler that generates high-performance executable from a DNN model description. ","archived":false,"fork":false,"pushed_at":"2024-09-19T06:31:57.000Z","size":192460,"stargazers_count":980,"open_issues_count":115,"forks_count":164,"subscribers_count":41,"default_branch":"main","last_synced_at":"2025-03-30T12:06:27.291Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/microsoft.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":"docs/Supported-Models.md","governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-04-01T04:15:38.000Z","updated_at":"2025-03-29T10:35:38.000Z","dependencies_parsed_at":"2024-12-12T19:13:33.710Z","dependency_job_id":null,"html_url":"https://github.com/microsoft/nnfusion","commit_stats":{"total_commits":226,"total_committers":28,"mean_commits":8.071428571428571,"dds":0.7964601769911505,"last_synced_commit":"00c13173a155556e50510575f6682a6a654cd585"},"previous_names":[],"tags_count":4,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2Fnnfusion","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2Fnnfusion/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2Fnnfusion/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2Fnnfusion/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/microsoft","download_url":"https://codeload.github.com/microsoft/nnfusion/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247485283,"owners_count":20946398,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T06:00:37.431Z","updated_at":"2025-04-06T13:08:46.757Z","avatar_url":"https://github.com/microsoft.png","language":"C++","readme":"**NNFusion** is a flexible and efficient DNN compiler that can generate high-performance executables from a DNN model description (e.g., TensorFlow frozen models and ONNX format). With the efficient compiler as core, NNFusion aims to:\n- facilitate full-stack model optimization\n- provide framework-free code generation capability\n- support new accelerator devices as target inferencing devices\n\n## Who should consider using NNFusion?\n- Developers who want to speed up the execution performance of their pre-defined or pre-trained DNN model.\n- Developers who want to deploy their pre-trained model as framework-free source codes with minimum library dependencies.\n- Researchers who want to quickly try new compiler optimization ideas or customize optimizations on some specific models.\n\n### [NNFusion v0.3 has been released!](https://github.com/microsoft/nnfusion/releases/tag/v0.3):raised_hands:\n\n## Highlight features\n- Provide a full-stack optimization mechanism, including:\n  - Data-flow graph optimizations, e.g., CSE, compile-time constant folding, etc.\n  - Model-specific kernel selection, kernel co-scheduling, kernel fusion and auto kernel tuner integration.\n  - Static memory layout and placement optimizations.\n- Provide ahead-of-time and source-to-source (model-to-code) compilation to reduce runtime overhead and remove library/framework dependencies.\n- Support popular DNN model formats including TensorFlow and ONNX as input models.\n- Support customized optimization in an easier and more efficient way, e.g., directly replacing hand-crafted kernels on the generated human-readable code.\n- Support commonly used devices like CUDA GPUs, ROCm GPUs and CPU.\n- Support parallel training via [SuperScaler](https://github.com/microsoft/SuperScaler)\n\n## Get Started\n### Quick Start with Docker Image\nFor end users, simply use docker to compile your model and generate high-performance executable.\n\nNNFusion supports and is well tested on Ubuntu 16.04 and 18.04 with a CUDA GPU equipped. \n\nYou should install nvidia-docker on your device to do the following steps.\n\nWe will use a simple TensorFlow LSTM inference model as an example. You can download a frozen version from our model zoo:\n\n`wget https://nnfusion.blob.core.windows.net/models/tensorflow/frozen_lstm_l8s8h256_bs1.pb`\n\nTo use your own model to get started, please refer to [Supported Models](https://github.com/microsoft/nnfusion/blob/master/models/tensorflow/README.md) to see whether it is supported and freeze your model according to [Freeze Your Model](https://github.com/microsoft/nnfusion/blob/master/docs/Freeze-TensorFlow-Models.md).\n\n1. Pull docker image\n`docker pull nnfusion/cuda:10.2-cudnn7-devel-ubuntu18.04`\n\n2. Run docker container with the given image\n\n```\ndocker run -t --name [YOUR_CONTAINER_NAME] -d nnfusion/cuda:10.2-cudnn7-devel-ubuntu18.04\ndocker start [YOUR_CONTAINER_NAME]\ndocker exec -it [YOUR_CONTAINER_NAME] bash\n```\n3. Put your model in the container\n\nIn host, you can use `docker cp host_path [YOUR_CONTAINER_NAME]:container_path` to copy your model into the container, or use `docker run -t -i -v \u003chost_dir\u003e:\u003ccontainer_dir\u003e` to map the host dir to the container.\n\n4. Compile Model\n\nWhen model is prepared, we can compile model in the container and run it to see the performance.\n```\ncd root\nnnfusion path/[YOUR_MODEL_FILE]\n```\nNote: \nIf you are using an ONNX model, the compile command will be  `nnfusion path/[YOUR_MODEL_FILE] -f onnx`\n\n5. Build and Run Compiled Model\n\n```\ncd root/nnfusion_rt/cuda_codegen\ncmake . \u0026\u0026 make -j\n./main_test\n```\n6. The output of NNFusion should be Tensors with value and model iteration times. Using the example model `frozen_lstm_l8s8h256_bs1.pb`, you will see the output of this model and a summary of performance:\n```\nResult_2261_0:\n8.921492e-03 1.182088e-02 8.937406e-03 7.932204e-03 1.574194e-02 3.844390e-03 -1.505094e-02 -1.112035e-02 5.026608e-03 -8.032205e-03  .. (size = 256, ends with 1.357487e-02);\nResult_2261_0:\n8.921492e-03 1.182088e-02 8.937406e-03 7.932204e-03 1.574194e-02 3.844390e-03 -1.505094e-02 -1.112035e-02 5.026608e-03 -8.032205e-03  .. (size = 256, ends with 1.357487e-02);\n...\nIteration time 2.735200 ms\nIteration time 2.741376 ms\nIteration time 2.733440 ms\nIteration time 2.726528 ms\nIteration time 2.731616 ms\nIteration time 2.736544 ms\nIteration time 2.728576 ms\nIteration time 2.733440 ms\nIteration time 2.732992 ms\nIteration time 2.729536 ms\nIteration time 2.726656 ms\nIteration time 2.732512 ms\nIteration time 2.732032 ms\nIteration time 2.730208 ms\nIteration time 2.732960 ms\nSummary: [min, max, mean] = [2.724704, 2.968352, 2.921987] ms\n```\nFor more detailed information on NNFusion usage, please refer to [NNFusion Usage](https://github.com/microsoft/nnfusion/blob/master/docs/Compile-a-Tensorflow-model-with-NNFusion.md).\n\nFor TensorFlow users, you can refer to [Kernel Tuner Tutorial](https://github.com/microsoft/nnfusion/blob/master/docs/Compile-a-model-with-kernel-tuning-enabled.md) to learn how to compile a TensorFlow model and tune each operator in this model to generate the end-to-end source code.\n\nFor detailed example about training，please refer to [How to use NNFusion Python interface for inference/training](https://github.com/microsoft/nnfusion/tree/master/src/python/example).\n\n### Build from Source Code\nResearchers or contributors who want to do more research on optimizing model compilation, you can build NNFusion from source code.\nTo build from source code, please read the following documents:\n1. Read [Before Started](https://github.com/microsoft/nnfusion/blob/master/docs/Before-Started.md) page to see supported CUDA GPUs and required libs. \n2. Read [Build Guide](https://github.com/microsoft/nnfusion/blob/master/docs/Build-Guide.md) for more information on how to build and install NNFusion in your native system or in the docker container.\n3. After building and installing NNFusion, please refer to [Compile Guide and Tool Usage](https://github.com/microsoft/nnfusion/blob/master/docs/Compile-a-Tensorflow-model-with-NNFusion.md) to learn how to compile or optimize a DNN model.\n\n### Speedups on benchmarks\n\nTo learn how much performance improvement that NNFusion can archive on some typical DNN models, please refer to the [README page](https://github.com/microsoft/nnfusion/blob/osdi20_artifact/artifacts/README.md) at our OSDI'20 artifact branch. \n\n# Contributing\n\nThis project welcomes contributions and suggestions.  Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nTo contribute, please refer to [Contribution Guide](https://github.com/microsoft/nnfusion/blob/master/docs/Contribution-Guide.md) to see more details.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n\n# Reference\nPlease cite NNFusion or Rammer in your publications if it helps your research:\n```\n@inproceedings {rammer-osdi20,\nauthor = {Lingxiao Ma and Zhiqiang Xie and Zhi Yang and Jilong Xue and Youshan Miao and Wei Cui and Wenxiang Hu and Fan Yang and Lintao Zhang and Lidong Zhou},\ntitle = {Rammer: Enabling Holistic Deep Learning Compiler Optimizations with rTasks},\nbooktitle = {14th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 20)},\nyear = {2020},\nisbn = {978-1-939133-19-9},\npages = {881--897},\nurl = {https://www.usenix.org/conference/osdi20/presentation/ma},\npublisher = {{USENIX} Association},\nmonth = nov,\n}\n```\n","funding_links":[],"categories":["Open Source Projects","Table of Contents","其他_机器学习与深度学习"],"sub_categories":["AI - Frameworks and Toolkits"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmicrosoft%2Fnnfusion","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmicrosoft%2Fnnfusion","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmicrosoft%2Fnnfusion/lists"}