Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jamormoussa/nanotorch
NanoTorch is a deep learning library built from scratch using NumPy and math.
- Host: GitHub
- URL: https://github.com/jamormoussa/nanotorch
- Owner: JamorMoussa
- Created: 2024-01-12T23:00:56.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-07-08T11:27:10.000Z (7 months ago)
- Last Synced: 2024-12-16T04:17:04.268Z (about 1 month ago)
- Topics: nanotoch, nanotorch, numpy, python, pytorch
- Language: Python
- Homepage: https://jamormoussa.github.io/NanoTorch/
- Size: 2.86 MB
- Stars: 19
- Watchers: 1
- Forks: 5
- Open Issues: 3
Metadata Files:
- Readme: README.md
README
![NanoTorch logo](https://raw.githubusercontent.com/JamorMoussa/NanoTorch/main/docs/images/logo.png)
# NanoTorch
**NanoTorch** is a deep learning library (micro-framework) inspired by the PyTorch framework, which I created using only **math** and **NumPy** :). My purpose here is not to create a powerful deep learning framework (maybe in the future), but solely to understand how deep learning frameworks like PyTorch and TensorFlow work behind the scenes.

## Neural Networks:
Let's explore an example of building a simple neural network (essentially a Linear Regression model) with **NanoTorch**:
```python
import nanotorch as nnt
import nanotorch.nn as nn
```

Let's build a simple model:
```python
class MLPModel(nn.Module):

    def __init__(self):
        self.fc = nn.Sequential(
            nn.Linear(3, 3),
            nn.Sigmoid(),
            nn.Linear(3, 5),
            nn.Sigmoid(),
            nn.Linear(5, 1)
        )

    def forward(self, input: nnt.Tensor) -> nnt.Tensor:
        return self.fc(input)
```
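Under the hood, each `nn.Linear(in, out)` applies an affine map `y = xW + b` and `nn.Sigmoid` squashes each element into (0, 1). A plain-NumPy sketch of the same forward pass follows; the function and weight names are assumptions for illustration, not NanoTorch's real internals.

```python
import numpy as np

def linear(x, W, b):
    # Affine map: y = xW + b
    return x @ W + b

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
# Shapes mirror Linear(3, 3) -> Sigmoid -> Linear(3, 5) -> Sigmoid -> Linear(5, 1)
W1, b1 = rng.normal(size=(3, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 5)), np.zeros(5)
W3, b3 = rng.normal(size=(5, 1)), np.zeros(1)

x = rng.normal(size=(1, 3))           # one 3-feature input row
h = sigmoid(linear(x, W1, b1))
h = sigmoid(linear(h, W2, b2))
out = linear(h, W3, b3)               # shape (1, 1): one scalar prediction
```

`nn.Sequential` is essentially this chain of calls, stored as a list of layers.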
Let's generate a simple dataset, using the `nnt.rand` function and the `nnt.dot` operation:

```python
X = nnt.rand(100, 3)
y = nnt.dot(X, nnt.Tensor([1, -2, 3]).T)
```

Now, let's create an instance of `MLPModel`:
```python
model = MLPModel()
```

We are dealing with a regression task, so `nn.MSELoss` is chosen:
```python
mse = nn.MSELoss(model.layers())
```

Let's define the stochastic gradient descent optimizer:
```python
opt = nnt.optim.SGD(model.layers(), lr=0.001)
```

Finally, the training loop:
```python
for epoch in range(30):
    for xi, yi in zip(X, y):
        opt.zero_grad()
        y_predi = model(nnt.Tensor(xi))
        loss = mse(y_predi, nnt.Tensor(yi))
        loss.backward()
        opt.step()

print(model.layers()[0].parameter)
```

The output is as follows:
```
[[1.00772484]
[1.98651816]
[3.04503581]]
```