https://github.com/savernish/forgenn
forgeNN is an in-development, purpose-built neural network framework combining a transparent NumPy autograd engine with a Keras-like API and performance-oriented primitives. Developed by a college student with an ambitious feature pipeline.
- Host: GitHub
- URL: https://github.com/savernish/forgenn
- Owner: Savernish
- License: MIT
- Created: 2025-09-06T11:18:03.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2025-09-19T20:14:05.000Z (4 months ago)
- Last Synced: 2025-09-22T02:56:04.399Z (4 months ago)
- Topics: artificial-intelligence, deep-learning, keras, machine-learning, mlp-networks, neural-network, numpy, pytorch, tensorflow, tensors
- Language: Python
- Homepage: https://www.savern.me
- Size: 4.16 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# forgeNN
## Table of Contents
- [Installation](#installation)
- [Overview](#overview)
- [Performance vs PyTorch](#performance-vs-pytorch)
- [Quick Start](#quick-start)
- [Complete Example](#complete-example)
- [Links](#links)
- [Roadmap](#roadmap-post-v200)
- [Contributing](#contributing)
- [Acknowledgments](#acknowledgments)
## Installation
```bash
pip install forgeNN
```
Optional extras:
```bash
# ONNX helpers (scaffold)
pip install "forgeNN[onnx]"
# CUDA backend (scaffold; requires compatible GPU/driver)
pip install "forgeNN[cuda]"
```
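To sanity-check the install, a bare import is usually enough. The snippet below also prints the version; `forgeNN.__version__` is an assumption here (common for PyPI packages, but not guaranteed), as is the import name matching the distribution name:

```python
# Quick post-install check. __version__ is assumed to be exposed;
# if it is absent, a successful import is already a good sign.
import forgeNN
print(forgeNN.__version__)
```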
## Overview
**forgeNN** is a modern neural network framework with an API built around a straightforward `Sequential` model, a fast NumPy autograd `Tensor`, and a Keras-like `compile/fit` training workflow.
This project is built and maintained by a single student developer. For background and portfolio/CV, see: https://savern.me
### Key Features
- **Fast NumPy core**: Vectorized operations with fused, stable math
- **Dynamic Computation Graphs**: Automatic differentiation with gradient tracking
- **Complete Neural Networks**: From simple neurons to complex architectures
- **Production Loss Functions**: Cross-entropy, MSE with numerical stability
- **Scaffolded Integrations**: Runtime device API for future CUDA; ONNX export/import stubs
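The autograd `Tensor` is the piece the other features build on. As a rough illustration only, here is what micrograd-style usage typically looks like; the constructor and method names below are hypothetical and may not match forgeNN's actual API, so check `examples/` for the real interface:

```python
import numpy as np
import forgeNN as fnn

# Hypothetical micrograd-style sketch; forgeNN's real Tensor API may differ.
x = fnn.Tensor(np.array([[1.0, 2.0], [3.0, 4.0]]))
y = (x * x).sum()   # builds a small dynamic computation graph
y.backward()        # reverse-mode autodiff fills in gradients
print(x.grad)       # expected: 2 * x, since d(sum(x^2))/dx = 2x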
## Performance vs PyTorch
**forgeNN is 3.52x faster than PyTorch on small models!**
| Metric | PyTorch | forgeNN | Advantage |
|--------|---------|---------|-----------|
| Training Time (MNIST) | 64.72s | 30.84s | **2.10x faster** |
| Test Accuracy | 97.30% | 97.37% | **+0.07 pts better** |
| Small Models (<109k params) | Baseline | **3.52x faster** | **Massive speedup** |
📊 Comparison and detailed docs are being refreshed for v2; see `examples/` for runnable demos.
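Benchmarks like these are hardware- and shape-dependent, so it is worth timing a run locally. Here is a minimal harness using only the `compile`/`fit` API from the Quick Start below, with random data standing in for MNIST:

```python
import time
import numpy as np
import forgeNN as fnn

# Placeholder data; substitute MNIST to reproduce the table above.
X = np.random.randn(1000, 20).astype(np.float32)
y = np.random.randint(0, 3, size=1000)

model = fnn.Sequential([fnn.Input((20,)),
                        fnn.Dense(64) @ 'relu',
                        fnn.Dense(3) @ 'linear'])
compiled = fnn.compile(model, optimizer=fnn.Adam(lr=1e-3),
                       loss='cross_entropy', metrics=['accuracy'])

start = time.perf_counter()
compiled.fit(X, y, epochs=10, batch_size=64)
print(f"Wall-clock training time: {time.perf_counter() - start:.2f}s")
```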
## Quick Start
### Keras-like Training (compile/fit)
```python
import numpy as np
import forgeNN as fnn

# Toy data: a 3-class problem with 20 input features.
X = np.random.randn(512, 20).astype(np.float32)
y = np.random.randint(0, 3, size=512)

model = fnn.Sequential([
    fnn.Input((20,)),    # optional Input layer seeds summary & shapes
    fnn.Dense(64) @ 'relu',
    fnn.Dense(32) @ 'relu',
    fnn.Dense(3) @ 'linear'
])

# Optionally inspect the architecture
model.summary()          # or model.summary((20,)) if no Input layer

opt = fnn.Adam(lr=1e-3)  # or other optimizers (adamw, sgd, etc.)
compiled = fnn.compile(model,
                       optimizer=opt,
                       loss='cross_entropy',
                       metrics=['accuracy'])

compiled.fit(X, y, epochs=10, batch_size=64)
loss, metrics = compiled.evaluate(X, y)

# Tip: `mse` auto-detects 1D integer class labels for (N, C) logits and
# one-hot encodes internally.
# model.summary() can be called any time after construction if an Input
# layer or input_shape is provided.
```
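For reference, the one-hot encoding that the `mse` tip describes is equivalent to this plain-NumPy transform on (N,) integer labels paired with (N, C) logits:

```python
import numpy as np

y = np.array([0, 2, 1])        # (N,) integer class labels
num_classes = 3                # C, matching the (N, C) logits
one_hot = np.eye(num_classes, dtype=np.float32)[y]
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```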
## Complete Example
See `examples/` for full-fledged demos.
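Until the v2 guides land, here is a minimal end-to-end sketch built only from the Quick Start API above, with synthetic data and a held-out split standing in for a real dataset:

```python
import numpy as np
import forgeNN as fnn

# Synthetic 3-class problem: 600 samples, 20 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20)).astype(np.float32)
y = rng.integers(0, 3, size=600)

# Hold out 20% of the data for evaluation.
X_train, X_test = X[:480], X[480:]
y_train, y_test = y[:480], y[480:]

model = fnn.Sequential([
    fnn.Input((20,)),
    fnn.Dense(64) @ 'relu',
    fnn.Dense(32) @ 'relu',
    fnn.Dense(3) @ 'linear'
])
compiled = fnn.compile(model, optimizer=fnn.Adam(lr=1e-3),
                       loss='cross_entropy', metrics=['accuracy'])

compiled.fit(X_train, y_train, epochs=20, batch_size=64)
loss, metrics = compiled.evaluate(X_test, y_test)
print(f"held-out loss={loss:.4f} metrics={metrics}")
```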
## Links
- **PyPI Package**: https://pypi.org/project/forgeNN/
- **Documentation**: v2 guides coming soon; examples in `examples/`
- **Issues**: GitHub Issues for bug reports and feature requests
- **Portfolio/CV**: https://savern.me
## Roadmap (post v2.0.0)
- CUDA backend and device runtime
  - Device abstraction for `Tensor` and layers
  - Initial CUDA kernels (Conv, GEMM, elementwise) and CPU/CUDA parity tests
  - Setup and troubleshooting guide
- ONNX: export and import (full coverage for the core API)
  - Export `Sequential` graphs with Conv/Pool/Flatten/Dense/LayerNorm/Dropout/activations
  - Import linear and branched graphs where feasible; shape inference checks
  - Round-trip parity tests and examples
- Model save and load (a format sketch follows this list)
  - Architecture JSON + weights (NPZ) format
  - `state_dict`/`load_state_dict` compatibility helpers
  - Versioning and minimal migration guidance
- Transformer positional encodings
  - Sinusoidal `PositionalEncoding` and learnable `PositionalEmbedding`
  - Tiny encoder demo with text classification walkthrough
- Performance and stability
  - CPU optimizations for conv/pool paths, memory reuse, and fewer allocations
  - Threading guidance (MKL/OpenBLAS), deterministic runs, and profiling notes
- Documentation
  - Practical guides for `Sequential`, `compile/fit`, model I/O, ONNX, and CUDA setup
  - Design overview of autograd and execution model
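The save/load format above is not implemented yet; as a rough sketch of what the NPZ weights half could look like with plain NumPy (the key layout is hypothetical and subject to change):

```python
import numpy as np

# Hypothetical layout: arrays keyed by layer and parameter name.
# The real format (and any state_dict helpers) is still on the roadmap.
weights = {
    "dense_0/W": np.zeros((20, 64), dtype=np.float32),
    "dense_0/b": np.zeros(64, dtype=np.float32),
}
np.savez("model_weights.npz", **weights)

restored = dict(np.load("model_weights.npz"))
assert set(restored) == set(weights)  # round-trip keeps keys and arrays
```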
## Contributing
I am not currently accepting contributions, but I'm always open to suggestions and feedback!
## Acknowledgments
- Inspired by educational automatic differentiation tutorials (micrograd)
- Built for both learning and production use
- Optimized with modern NumPy practices
- **Available on PyPI**: `pip install forgeNN`
---