Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/bddean/BQNprop
Toy backpropagation implementation written in BQN.
- Host: GitHub
- URL: https://github.com/bddean/BQNprop
- Owner: bddean
- Created: 2021-06-26T23:44:20.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2021-06-27T03:11:58.000Z (over 3 years ago)
- Last Synced: 2024-08-04T00:12:01.123Z (4 months ago)
- Size: 2.93 KB
- Stars: 3
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-bqn - BQNprop
README
# BQNprop
BQNprop is a toy backpropagation implementation I wrote to learn about
machine learning and array programming. It's extremely bare-bones:
hyperparameters are all hardcoded, there are no bias nodes, and I haven't
even tested it on any real data.

I am still pleasantly surprised at how short the implementation ended
up being: only 17 lines, excluding comments, blank lines, and testing.

# Future work
Real ML libraries apparently use automatic differentiation (AD)
nowadays. I've been looking into
[The Simple Essence of Automatic Differentiation](http://conal.net/papers/essence-of-ad/)
as a potential implementation approach, which involves compiling
functions to an efficiently differentiable form. For BQN this might mean
a codegen step, and ideally we'd want BQN running on a GPU for this.
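To give a feel for the AD approach mentioned above, here is a minimal reverse-mode sketch, written in Python rather than BQN purely for illustration (the `Var` class and its methods are hypothetical, not part of any library or of BQNprop): each operation records its local partial derivatives, and `backward` walks the recorded graph in reverse topological order, accumulating gradients.

```python
class Var:
    """A value plus a record of how gradients flow back to its inputs."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local partial derivative)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self):
        # Topologically sort the graph, then accumulate gradients in reverse,
        # so every node's gradient is complete before it is propagated.
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += local * node.grad

x, y = Var(3.0), Var(4.0)
z = x * y + x          # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

The "essence of AD" idea goes further than this (deriving the whole thing compositionally rather than via a taped graph), but the local-derivatives-plus-reverse-pass structure is the core that a BQN codegen step would need to produce.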
# References
I used [3Blue1Brown](https://www.youtube.com/watch?v=Ilg3gGewQ5U&t=750s)
and [Michael Nielsen's online
textbook](http://neuralnetworksanddeeplearning.com/chap2.html)
to learn the math. I used [this worked-out
example](https://steemit.com/ai/@ralampay/training-a-neural-network-a-numerical-example-part-1)
as a test. And, of course, the [BQN docs](https://mlochbaum.github.io/BQN/).
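For reference, one training step of the kind of bare-bones backpropagation described above can be sketched as follows, in Python rather than BQN for illustration (the function names, 2-2-2 network shape, and all numbers are arbitrary, not taken from BQNprop or from the cited worked example): a forward pass through one sigmoid hidden layer, error deltas pushed backward, and a gradient-descent update, with a hardcoded learning rate and no bias nodes.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(W1, W2, x):
    # One hidden layer, sigmoid activations, no bias nodes.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    o = [sigmoid(sum(w * hj for w, hj in zip(row, h))) for row in W2]
    return h, o

def loss(o, t):
    # Squared error, E = 1/2 * sum((o - t)^2).
    return 0.5 * sum((oi - ti) ** 2 for oi, ti in zip(o, t))

def backprop_step(W1, W2, x, t, lr=0.5):
    h, o = forward(W1, W2, x)
    # Output deltas: dE/dz for each output unit (sigmoid derivative o*(1-o)).
    d_out = [(oi - ti) * oi * (1 - oi) for oi, ti in zip(o, t)]
    # Hidden deltas: output deltas pulled back through W2.
    d_hid = [sum(W2[k][j] * d_out[k] for k in range(len(W2)))
             * h[j] * (1 - h[j]) for j in range(len(h))]
    # Gradient-descent update with a hardcoded learning rate.
    W2 = [[W2[k][j] - lr * d_out[k] * h[j] for j in range(len(h))]
          for k in range(len(W2))]
    W1 = [[W1[j][i] - lr * d_hid[j] * x[i] for i in range(len(x))]
          for j in range(len(W1))]
    return W1, W2

# Arbitrary example values, chosen only to exercise the code.
W1 = [[0.1, 0.2], [0.3, 0.4]]
W2 = [[0.5, 0.6], [0.7, 0.8]]
x, t = [0.5, 0.9], [0.1, 0.9]
W1, W2 = backprop_step(W1, W2, x, t)
```

Written with array operations instead of list comprehensions, each of these steps collapses to a line or two, which is roughly how the BQN version stays so short.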