phgrad - Cause there are not enough unusable autograd libraries in python
https://github.com/hallerpatrick/phgrad

- Host: GitHub
- URL: https://github.com/hallerpatrick/phgrad
- Owner: HallerPatrick
- Created: 2023-10-24T20:22:40.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-01T19:52:55.000Z (5 months ago)
- Last Synced: 2024-10-07T11:23:28.246Z (3 months ago)
- Language: Python
- Homepage:
- Size: 1.82 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 2
Metadata Files:
- Readme: README.md
# phgrad
### Cause there are not enough unusable autograd libraries in python
![Logo](logo.png)
A tiny [autograd](https://en.wikipedia.org/wiki/Automatic_differentiation) library for learning purposes, inspired by [micrograd](https://github.com/karpathy/micrograd/tree/master) and [tinygrad](https://github.com/tinygrad/tinygrad). Everything at this point is experimental and not optimized for performance, so be aware that if you touch this library, you will probably break something.

The goal is to optimize the library for performance and add more features. I will mostly adjust this library to be usable in NLP tasks, and I will try to keep the API as close as possible to [PyTorch](https://pytorch.org/). A goal is to provide CUDA support while keeping the dependency list as small as possible (currently only numpy, and now cupy).

The [examples](./examples) folder contains some examples of how to use the library.
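As a quick taste of the PyTorch-style API, here is a minimal sketch of building a computation graph and differentiating through it. This is a sketch under assumptions: only `relu`, `mean`, and `backward` appear in the example below, so elementwise `+` on `Tensor` and the `.grad` attribute are assumed here from the PyTorch-like design.

```python
import numpy as np

from phgrad.engine import Tensor

# Build a tiny computation graph and differentiate through it.
# Assumption: Tensor supports elementwise `+` and exposes gradients
# via a `.grad` attribute, mirroring the PyTorch-style API.
a = Tensor(np.random.randn(4, 4))
b = Tensor(np.random.randn(4, 4))

loss = (a + b).relu().mean()  # forward pass records the graph
loss.backward()               # reverse-mode pass fills in gradients

print(a.grad)  # d(loss)/d(a), same shape as `a`
```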
### Example
```python
import numpy as np

from phgrad.engine import Tensor
from phgrad.nn import Linear, Module

# We now have cuda support!
device = "cuda"


class MLP(Module):
    """A simple Multi Layer Perceptron."""

    def __init__(
        self,
        inp_dim: int,
        hidden_size: int,
        output_dim: int,
        bias: bool = True,
    ):
        super().__init__()
        self.l1 = Linear(inp_dim, hidden_size, bias=bias)
        self.l2 = Linear(hidden_size, output_dim, bias=bias)

    def forward(self, x: Tensor) -> Tensor:
        x = self.l1(x)
        x = x.relu()
        x = self.l2(x)
        return x


model = MLP(784, 256, 10).to(device)
x = Tensor(np.random.randn(32, 784), device=device)
y = model(x).mean()
y.backward()
```
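After `backward()` has populated the gradients, a training step is just a small update loop over the model above. The sketch below is an assumption, not phgrad's confirmed API: `Module.parameters()` and per-tensor `.data`/`.grad` attributes are borrowed from PyTorch and do not appear in the example, so the exact names may differ.

```python
# Hypothetical SGD step, continuing the example above.
# Assumed API: model.parameters() yields Tensors carrying `.data`
# and the `.grad` filled in by backward().
lr = 0.01
for p in model.parameters():
    p.data -= lr * p.grad
```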
### Resources

1. https://github.com/torch/nn/blob/master/doc/transfer.md
2. https://github.com/karpathy/micrograd/tree/master
3. https://github.com/geohot/ai-notebooks/blob/master/mnist_from_scratch.ipynb
4. https://github.com/ICEORY/softmax_loss_gradient
5. https://notesbylex.com/negative-log-likelihood#:~:text=Negative%20log%2Dlikelihood%20is%20a,all%20items%20in%20the%20batch.
6. https://github.com/huggingface/candle/tree/main