{"id":13529028,"url":"https://github.com/sony/nnabla","last_synced_at":"2025-05-12T15:30:03.113Z","repository":{"id":37686736,"uuid":"95395370","full_name":"sony/nnabla","owner":"sony","description":"Neural Network Libraries","archived":false,"fork":false,"pushed_at":"2024-11-15T23:43:58.000Z","size":125762,"stargazers_count":2746,"open_issues_count":36,"forks_count":332,"subscribers_count":152,"default_branch":"master","last_synced_at":"2025-04-08T21:17:19.862Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://nnabla.org/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/sony.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-06-26T01:07:10.000Z","updated_at":"2025-03-28T02:13:01.000Z","dependencies_parsed_at":"2022-07-09T03:16:19.383Z","dependency_job_id":"e417c278-52f0-4753-a370-badde6f2de1c","html_url":"https://github.com/sony/nnabla","commit_stats":{"total_commits":2406,"total_committers":89,"mean_commits":27.03370786516854,"dds":0.8474646716541978,"last_synced_commit":"e012f71467369e282d3f4e56dfc7af7144f0cef7"},"previous_names":[],"tags_count":110,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sony%2Fnnabla","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sony%2Fnnabla/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sony%2Fnnabla/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sony%2Fnnabla/m
anifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/sony","download_url":"https://codeload.github.com/sony/nnabla/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250478093,"owners_count":21437106,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T07:00:31.736Z","updated_at":"2025-04-23T17:22:31.208Z","avatar_url":"https://github.com/sony.png","language":"Python","readme":"# Neural Network Libraries\n\n[Neural Network Libraries](https://arxiv.org/abs/2102.06725) is a deep learning framework that is intended to be used for research,\ndevelopment and production. 
We aim to have it running everywhere: desktop PCs, HPC\nclusters, embedded devices and production servers.\n\n\n* [Neural Network Libraries - CUDA extension](https://github.com/sony/nnabla-ext-cuda): An extension library of Neural Network Libraries that allows users to speed up computation on CUDA-capable GPUs.\n* [Neural Network Libraries - Examples](https://github.com/sony/nnabla-examples): Working examples of Neural Network Libraries from basic to state-of-the-art.\n* [Neural Network Libraries - C Runtime](https://github.com/sony/nnabla-c-runtime):  Runtime library for inference of neural networks created by Neural Network Libraries.\n* [Neural Network Libraries - NAS](https://github.com/sony/nnabla-nas):  Hardware-aware Neural Architecture Search (NAS) for Neural Network Libraries.\n* [Neural Network Libraries - RL](https://github.com/sony/nnabla-rl):  Deep Reinforcement Learning (RL) library built on top of Neural Network Libraries.\n* [Neural Network Console](https://dl.sony.com/): A Windows GUI app for neural network development.\n\n\n## Installation\n\nInstalling Neural Network Libraries is easy:\n\n```\npip install nnabla\n```\n\nThis installs the CPU version of Neural Network Libraries. 
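\n\nTo verify the installation, you can import the package and print its version (a minimal check; this assumes the installed package exposes `nnabla.__version__`):\n\n```\npython -c 'import nnabla; print(nnabla.__version__)'\n```\n\n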
GPU acceleration can be added by installing the CUDA extension with the following command:\n```\npip install nnabla-ext-cuda116\n```\nThe above command is for CUDA Toolkit 11.6.\n\nThe other supported CUDA packages are listed [here](https://nnabla.readthedocs.io/en/latest/python/pip_installation_cuda.html#cuda-vs-cudnn-compatibility).\n\nCUDA versions 10.x, 9.x, and 8.x are no longer supported.\n\n\nFor more details, see the [installation section](http://nnabla.readthedocs.io/en/latest/python/installation.html) of the documentation.\n\n### Building from Source\n\nSee [Build Manuals](doc/build/README.md).\n\n### Running on Docker\nFor details on running on Docker, see the [installation section](http://nnabla.readthedocs.io/en/latest/python/installation.html) of the documentation.\n\n## Features\n\n### Easy, flexible and expressive\n\nThe Python API built on the Neural Network Libraries C++11 core gives you flexibility and\nproductivity. For example, a two-layer neural network with classification loss\ncan be defined in the following five lines of code (hyperparameters are enclosed\nin `\u003c\u003e`).\n\n```python\nimport nnabla as nn\nimport nnabla.functions as F\nimport nnabla.parametric_functions as PF\n\nx = nn.Variable(\u003cinput_shape\u003e)\nt = nn.Variable(\u003ctarget_shape\u003e)\nh = F.tanh(PF.affine(x, \u003chidden_size\u003e, name='affine1'))\ny = PF.affine(h, \u003ctarget_size\u003e, name='affine2')\nloss = F.mean(F.softmax_cross_entropy(y, t))\n```\n\nTraining can be done by:\n\n```python\nimport nnabla.solvers as S\n\n# Create a solver (parameter updater)\nsolver = S.Adam(\u003csolver_params\u003e)\nsolver.set_parameters(nn.get_parameters())\n\n# Training iteration\nfor n in range(\u003cnum_training_iterations\u003e):\n    # Set data from any data source\n    x.d = \u003cset data\u003e\n    t.d = \u003cset label\u003e\n    # Initialize gradients\n    solver.zero_grad()\n    # Forward and backward execution\n    loss.forward()\n    loss.backward()\n  
  # Update parameters by computed gradients\n    solver.update()\n```\n\nThe dynamic computation graph enables flexible runtime network construction.\nNeural Network Libraries supports both static and dynamic graph paradigms\nthrough the same API.\n\n```python\nimport numpy as np\n\nx.d = \u003cset data\u003e\nt.d = \u003cset label\u003e\ndrop_depth = np.random.rand(\u003cnum_stochastic_layers\u003e) \u003c \u003clayer_drop_ratio\u003e\nwith nn.auto_forward():\n    h = F.relu(PF.convolution(x, \u003chidden_size\u003e, (3, 3), pad=(1, 1), name='conv0'))\n    for i in range(\u003cnum_stochastic_layers\u003e):\n        if drop_depth[i]:\n            continue  # Stochastically drop a layer\n        h2 = F.relu(PF.convolution(h, \u003chidden_size\u003e, (3, 3), pad=(1, 1), \n                                   name='conv%d' % (i + 1)))\n        h = F.add2(h, h2)\n    y = PF.affine(h, \u003ctarget_size\u003e, name='classification')\n    loss = F.mean(F.softmax_cross_entropy(y, t))\n# Backward computation (can also be done in a dynamically executed graph)\nloss.backward()\n```\n\nYou can differentiate to any order of derivatives with `nn.grad`.\n\n```python\nimport nnabla as nn\nimport nnabla.functions as F\nimport numpy as np\n\nx = nn.Variable.from_numpy_array(np.random.randn(2, 2)).apply(need_grad=True)\nx.grad.zero()\ny = F.sin(x)\ndef grad(y, x, n=1):\n    dx = [y]\n    for _ in range(n):\n        dx = nn.grad([dx[0]], [x])\n    return dx[0]\ndnx = grad(y, x, n=10)  # The 10th derivative of sin(x) is -sin(x)\ndnx.forward()\nprint(np.allclose(-np.sin(x.d), dnx.d))\ndnx.backward()\nprint(np.allclose(-np.cos(x.d), x.g))\n\n# Show the registry status\nfrom nnabla.backward_functions import show_registry\nshow_registry()\n```\n\n### Command line utility\n\nNeural Network Libraries provides a command line utility `nnabla_cli` for easier use of the libraries.\n\n`nnabla_cli` provides the following functionality:\n\n- Training, Evaluation or Inference with an NNP file.\n- Dataset and Parameter manipulation.\n- File format converter\n  - From ONNX to NNP and NNP to 
ONNX.\n  - From TensorFlow to NNP and NNP to TensorFlow.\n  - From NNP to TFLite.\n  - From ONNX or NNP to NNB or C source code.\n\nFor more details, see the [Documentation](doc/python/command_line_interface.rst).\n\n\n### Portable and multi-platform\n\n* The Python API can be used on Linux and Windows\n* Most of the library code is written in C++14, deployable to embedded devices\n\n### Extensible\n\n* Easy to add new modules like neural network operators and optimizers\n* The library allows developers to add specialized implementations (e.g., for\n  FPGA, ...). For example, we provide a CUDA backend as an extension, which provides\n  a speed-up through GPU-accelerated computation.\n\n### Efficient\n\n* High speed on a single CUDA GPU\n* Memory optimization engine\n* Multiple GPU support\n\n\n## Documentation\n\n\u003chttps://nnabla.readthedocs.org\u003e\n\n### Getting started\n\n* A number of Jupyter notebook tutorials can be found in the [tutorial](https://github.com/sony/nnabla/tree/master/tutorial) folder.\n  We recommend starting from `by_examples.ipynb` for a first\n  working example in Neural Network Libraries and `python_api.ipynb` for an introduction to the\n  Neural Network Libraries API.\n\n* We also provide some more sophisticated examples in the [`nnabla-examples`](https://github.com/sony/nnabla-examples) repository.\n\n* C++ API examples are available in [`examples/cpp`](https://github.com/sony/nnabla/tree/master/examples/cpp).\n\n\n## Contribution guide\n\nThe technology is rapidly progressing, and researchers and developers often want to add their custom features to a deep learning framework.\nNNabla shines in this respect. 
The architecture of Neural Network Libraries is clean and quite simple.\nAlso, you can add new features very easily with the help of our code template generation system.\nSee the following link for details.\n\n* [Contribution guide](CONTRIBUTING.md)\n\n## License \u0026 Notice\n\nNeural Network Libraries is provided under the [Apache License Version 2.0](LICENSE).\n\nIt also depends on some open source software packages. For more information, see [LICENSES](third_party/LICENSES.md).\n\n## Citation\n\n```\n@misc{hayakawa2021neural,\n      title={Neural Network Libraries: A Deep Learning Framework Designed from Engineers' Perspectives}, \n      author={Takuya Narihira and Javier Alonsogarcia and Fabien Cardinaux and Akio Hayakawa\n              and Masato Ishii and Kazunori Iwaki and Thomas Kemp and Yoshiyuki Kobayashi\n              and Lukas Mauch and Akira Nakamura and Yukio Obuchi and Andrew Shin and Kenji Suzuki\n              and Stephen Tiedmann and Stefan Uhlich and Takuya Yashima and Kazuki Yoshiyama},\n      year={2021},\n      eprint={2102.06725},\n      archivePrefix={arXiv},\n      primaryClass={cs.LG}\n}\n```\n","funding_links":[],"categories":["Neural Networks (NN) and Deep Neural Networks (DNN)","Python","Deep Learning","Machine Learning Frameworks","Distributed Machine Learning","Deep Learning Framework"],"sub_categories":["NN/DNN Software Frameworks","Others","High-Level DL APIs"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsony%2Fnnabla","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsony%2Fnnabla","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsony%2Fnnabla/lists"}