Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/karpathy/micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
- Host: GitHub
- URL: https://github.com/karpathy/micrograd
- Owner: karpathy
- License: MIT
- Created: 2020-04-13T04:31:18.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2024-05-09T19:10:29.000Z (6 months ago)
- Last Synced: 2024-06-09T14:33:14.441Z (5 months ago)
- Language: Jupyter Notebook
- Homepage:
- Size: 242 KB
- Stars: 8,629
- Watchers: 143
- Forks: 1,176
- Open Issues: 39
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- AiTreasureBox - karpathy/micrograd - A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API (Repos)
- stars - karpathy/micrograd - A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API (Jupyter Notebook)
- awesome-python-machine-learning-resources - GitHub (40% open issues · ⏱️ 18.04.2020) (PyTorch utilities)
README
# micrograd
![awww](puppy.jpg)
A tiny Autograd engine (with a bite! :)). Implements backpropagation (reverse-mode autodiff) over a dynamically built DAG and a small neural networks library on top of it with a PyTorch-like API. Both are tiny, with about 100 and 50 lines of code respectively. The DAG only operates over scalar values, so e.g. we chop up each neuron into all of its individual tiny adds and multiplies. However, this is enough to build up entire deep neural nets doing binary classification, as the demo notebook shows. Potentially useful for educational purposes.
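To make the idea concrete, here is a stripped-down sketch of what such a scalar autograd node does; this is a simplified illustration (the class name `Tiny` and its details are not micrograd's actual code, which lives in `micrograd/engine.py` as the `Value` class):

```python
class Tiny:
    """A scalar that remembers how it was produced, so gradients can flow back."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None  # applies the local chain rule

    def __add__(self, other):
        out = Tiny(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad    # d(a+b)/da = 1
            other.grad += out.grad   # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Tiny(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the DAG, then apply the chain rule node by node.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()
```

Calling `backward()` on the final node walks the recorded graph from output back to every input, accumulating gradients along the way; that is all "reverse-mode autodiff over a dynamically built DAG" means here.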
### Installation
```bash
pip install micrograd
```

### Example usage
Below is a slightly contrived example showing a number of possible supported operations:
```python
from micrograd.engine import Value

a = Value(-4.0)
b = Value(2.0)
c = a + b
d = a * b + b**3
c += c + 1
c += 1 + c + (-a)
d += d * 2 + (b + a).relu()
d += 3 * d + (b - a).relu()
e = c - d
f = e**2
g = f / 2.0
g += 10.0 / f
print(f'{g.data:.4f}') # prints 24.7041, the outcome of this forward pass
g.backward()
print(f'{a.grad:.4f}') # prints 138.8338, i.e. the numerical value of dg/da
print(f'{b.grad:.4f}') # prints 645.5773, i.e. the numerical value of dg/db
```

### Training a neural net
The notebook `demo.ipynb` provides a full demo of training a 2-layer neural network (MLP) binary classifier. This is achieved by initializing a neural net from the `micrograd.nn` module, implementing a simple SVM "max-margin" binary classification loss, and using SGD for optimization. As shown in the notebook, using a 2-layer neural net with two 16-node hidden layers we achieve the following decision boundary on the moon dataset:
![2d neuron](moon_mlp.png)
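The training loop boils down to a forward pass, a hinge (max-margin) loss, a backward pass, and an SGD step. Below is a minimal sketch of that shape; the toy data, learning rate, and step count are made up for illustration (see `demo.ipynb` for the full version on the moons dataset):

```python
from micrograd.nn import MLP

# toy inputs and +1/-1 targets, purely illustrative
X = [[2.0, 3.0], [-1.0, -2.0], [3.0, -1.0], [-2.0, 1.0]]
y = [1.0, -1.0, 1.0, -1.0]

model = MLP(2, [16, 16, 1])  # 2 inputs, two 16-node hidden layers, 1 output score

for step in range(20):
    # forward pass: score each input, then the max-margin loss
    scores = [model(x) for x in X]
    losses = [(1 + -yi * si).relu() for yi, si in zip(y, scores)]
    loss = sum(losses) * (1.0 / len(losses))

    # backward pass
    model.zero_grad()
    loss.backward()

    # plain SGD update
    for p in model.parameters():
        p.data -= 0.05 * p.grad
```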
### Tracing / visualization
For added convenience, the notebook `trace_graph.ipynb` produces graphviz visualizations. E.g. this one below is of a simple 2D neuron, arrived at by calling `draw_dot` on the code below, and it shows both the data (left number in each node) and the gradient (right number in each node).
```python
from micrograd import nn
from micrograd.engine import Value  # needed to build the scalar inputs

n = nn.Neuron(2)
x = [Value(1.0), Value(-2.0)]
y = n(x)
dot = draw_dot(y)  # draw_dot is defined in the trace_graph.ipynb notebook
```

![2d neuron](gout.svg)
### Running tests
To run the unit tests you will have to install [PyTorch](https://pytorch.org/), which the tests use as a reference for verifying the correctness of the calculated gradients. Then simply:
```bash
python -m pytest
```
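The tests work by computing the same expression in both frameworks and comparing values and gradients. As an illustration of that kind of check (a hedged sketch, not the repository's actual test code):

```python
import torch
from micrograd.engine import Value

def test_grads_match_pytorch():
    # micrograd side
    a = Value(-2.0)
    b = Value(3.0)
    c = (a * b + b**3).relu() + a
    c.backward()

    # PyTorch reference
    at = torch.tensor(-2.0, requires_grad=True)
    bt = torch.tensor(3.0, requires_grad=True)
    ct = (at * bt + bt**3).relu() + at
    ct.backward()

    # forward values and gradients should agree
    assert abs(c.data - ct.item()) < 1e-5
    assert abs(a.grad - at.grad.item()) < 1e-5
    assert abs(b.grad - bt.grad.item()) < 1e-5
```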
### License
MIT