https://github.com/deniztemur00/simplegrad
Simple auto gradient project that supports basic operations written in C++ from scratch.
- Host: GitHub
- URL: https://github.com/deniztemur00/simplegrad
- Owner: deniztemur00
- Created: 2024-11-13T13:19:50.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-12-17T13:29:14.000Z (5 months ago)
- Last Synced: 2025-03-25T12:46:12.551Z (about 2 months ago)
- Topics: cpp, deep-learning, micrograd, neural-network, python
- Language: Python
- Homepage:
- Size: 86.9 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# SimpleGrad
SimpleGrad is a lightweight automatic differentiation library written in C++ with Python bindings.
## Prerequisites
- Python >= 3.6
- g++/gcc
- CMake

## Build & Installation
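Before building, you can quickly confirm the required toolchain is available. This is a convenience check only, not part of SimpleGrad:

```python
# Illustrative prerequisite check -- not part of the simplegrad codebase
import shutil
import sys

assert sys.version_info >= (3, 6), "Python >= 3.6 is required"
for tool in ("g++", "cmake", "make"):
    print(f"{tool}: {'found' if shutil.which(tool) else 'missing'}")
```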
1. Clone the repository:
```bash
git clone https://github.com/deniztemur00/simplegrad.git
cd simplegrad
```
2. Build with the Makefile:
```bash
make build-python
```
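Once the build finishes, a quick import confirms the Python bindings are usable. This is a minimal sanity check, assuming the built package is importable from the repository root:

```python
# Sanity check after `make build-python` (run from the repository root)
from simplegrad import MLP, Node

print("simplegrad imported:", MLP, Node)
```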
## Features

- Multi-layer perceptron (MLP) that can be used for regression and classification tasks
- Supports basic arithmetic operations
- Lightweight and easy to use
- Gradient computation
- Backpropagation (see the Node-level sketch at the start of the Usage section)
- NumPy compatibility

## Usage
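SimpleGrad can also be used directly at the Node level. The sketch below builds a small scalar expression and backpropagates through it; it sticks to the operators, `.data()`, and `.backward()` used in the MLP example further down, and omits any gradient accessor since its name isn't documented here:

```python
from simplegrad import Node

# Build a small scalar expression from Node values
a = Node(2.0)
b = Node(-3.0)
c = (a - b) * 2.0    # (2 - (-3)) * 2 = 10
d = c ** 2           # 10^2 = 100

print(d.data())      # forward value: 100.0
d.backward()         # backpropagate through the expression graph
```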
Here's a fuller example of how to use the MLP in SimpleGrad:
```python
from simplegrad import MLP, Node
from sklearn import datasets

# Generate a toy binary-classification dataset
X, y = datasets.make_classification(
    n_samples=1000,
    n_features=10,
    n_classes=2,
    random_state=42,  # for reproducibility
)

# Hyperparameters
lr = 0.01
batch_size = 16
epochs = 10

# Define the model: 10 input nodes, one hidden layer of 12 neurons, 1 output node
model = MLP(10, [12, 1])

n_batches = (len(X) + batch_size - 1) // batch_size  # Ceiling division

for epoch in range(epochs):
    epoch_loss = 0.0
    for i in range(0, len(X), batch_size):
        batch_X = X[i : i + batch_size]
        batch_y = y[i : i + batch_size]
        current_batch_size = len(batch_X)  # Handle last (possibly smaller) batch

        batch_loss = 0.0
        # model.zero_grad()  # gradients are automatically reset after step()

        # Accumulate gradients over the batch
        for x, y_true in zip(batch_X, batch_y):
            y_hat = model(x)[0]
            y_true = Node(y_true)
            loss = (y_hat - y_true) ** 2
            loss = loss * (1.0 / current_batch_size)  # Normalize loss
            batch_loss += loss.data()
            loss.backward()

        model.step(lr)  # Update weights using accumulated gradients
        epoch_loss += batch_loss

    # Average loss over all batches
    print(f"Epoch {epoch+1}, Average Loss: {epoch_loss/n_batches:.3f}")
```
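After training, inference is just a forward call on the model. The snippet below is a sketch that reuses the `model(x)[0]` call and `.data()` from the loop above; the 0.5 decision threshold is an illustrative choice, not something defined by the library:

```python
# Classify a few samples with the trained model (threshold is illustrative)
for x, y_true in zip(X[:5], y[:5]):
    y_hat = model(x)[0].data()       # raw scalar output of the network
    pred = 1 if y_hat > 0.5 else 0   # hypothetical decision threshold
    print(f"prediction={pred}  target={y_true}  raw_output={y_hat:.3f}")
```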
You can execute the training example above by running the following command:
```bash
make run
```

## Testing
Tests are written to verify the correctness of the Node class, which in turn ensures the MLP works as expected. You can run the tests with the following command:
```bash
make test
```
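The actual tests live in the repository and are driven by `make test`. As a rough idea of what such a check looks like, a forward-value assertion against the Node API could be written as follows (a hypothetical sketch, not the project's test code):

```python
# Hypothetical test sketch -- the real tests are run via `make test`
from simplegrad import Node

def test_forward_arithmetic():
    a, b = Node(3.0), Node(4.0)
    out = (a - b) ** 2               # (3 - 4)^2 = 1
    assert abs(out.data() - 1.0) < 1e-9
```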
## License

This project is licensed under the MIT License.
## Acknowledgements
This project was inspired by the [micrograd](https://github.com/karpathy/micrograd) project by Andrej Karpathy.
## TODO
- Compile and build for other platforms
- Publish on PyPI
- Cyclic reference optimization