Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/evcu/numpy_autograd
A simple implementation of an autograd engine
- Host: GitHub
- URL: https://github.com/evcu/numpy_autograd
- Owner: evcu
- Created: 2018-06-15T23:24:38.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2018-09-22T01:10:15.000Z (over 6 years ago)
- Last Synced: 2025-01-11T17:34:14.045Z (3 days ago)
- Topics: autograd, ml, numpy, pytorch, variable
- Language: Jupyter Notebook
- Size: 64.5 KB
- Stars: 24
- Watchers: 4
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# numpy_autograd
In this repo I aim to motivate and show how to write an automatic differentiation library. There are various strategies for performing automatic differentiation, each with its own strengths and weaknesses; for an overview of these methods, please refer to [1].

PyTorch uses graph-based automatic differentiation: every operation performed on tensors can be represented as a node in a DAG (directed acyclic graph). In the case of neural networks, the loss value computed for a given mini-batch is the last node of the graph. The chain rule is a very simple and yet very powerful rule. Thinking in terms of the DAG, the chain rule tells us that we can take the derivative at a node once the output gradient of that node has been completely accumulated. So if we make each node in the graph remember its parents, we can run a topological sort on the DAG and call the derivative function of the nodes in that (reversed) order. That's a very simple overview of how autograd in [PyTorch](https://pytorch.org/) works, and it is very simple to implement! Let's do it.
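To make the idea concrete, here is a minimal sketch of such an engine on top of NumPy. The `Variable` class, its operator overloads, and the `backward()` method below are illustrative names assumed for this sketch, not the repo's actual API: each `Variable` records its parents and a local backward rule, and `backward()` topologically sorts the DAG and applies the chain rule in reverse order.

```python
# A minimal sketch of graph-based reverse-mode autodiff (illustrative names,
# not the repo's actual code): each Variable remembers its parents and a
# local backward rule; backward() walks the DAG in reverse topological order.
import numpy as np

class Variable:
    def __init__(self, value, parents=(), backward_fn=lambda g: ()):
        self.value = np.asarray(value, dtype=float)
        self.grad = np.zeros_like(self.value)
        self.parents = parents          # nodes this Variable was computed from
        self.backward_fn = backward_fn  # output grad -> grads w.r.t. parents

    def __add__(self, other):
        return Variable(self.value + other.value, (self, other),
                        lambda g: (g, g))

    def __mul__(self, other):
        return Variable(self.value * other.value, (self, other),
                        lambda g: (g * other.value, g * self.value))

    def backward(self):
        # Topologically sort the DAG rooted at this node.
        order, visited = [], set()
        def visit(node):
            if id(node) not in visited:
                visited.add(id(node))
                for p in node.parents:
                    visit(p)
                order.append(node)
        visit(self)

        # Walk the graph in reverse topological order, so each node's output
        # gradient is fully accumulated before its parents are processed.
        self.grad = np.ones_like(self.value)
        for node in reversed(order):
            for parent, g in zip(node.parents, node.backward_fn(node.grad)):
                parent.grad = parent.grad + g

# d(x*y + x)/dx = y + 1 = 4,  d(x*y + x)/dy = x = 2
x, y = Variable(2.0), Variable(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```

Supporting more operations only means adding their forward computation together with a local gradient rule, exactly as `__add__` and `__mul__` do here.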
[1] Automatic differentiation in machine learning: a survey https://arxiv.org/abs/1502.05767