https://github.com/kostareg/evolution-rs
Simulating the evolution of tiny neural networks.
- Host: GitHub
- URL: https://github.com/kostareg/evolution-rs
- Owner: kostareg
- Created: 2025-02-19T17:08:24.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2025-03-03T04:55:54.000Z (about 2 months ago)
- Last Synced: 2025-03-03T05:24:46.683Z (about 2 months ago)
- Language: Rust
- Homepage:
- Size: 23.4 KB
- Stars: 9
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-quads - evolution-rs - simulating the evolution of tiny neural networks. (Apps or visualizations / Apps or visualizations: On top of macroquad)
README
# Evolution
Simulating the evolution of tiny neural networks.
In this demo, 200 entities called "blobs" are placed in a 128x128 environment, each wired with a randomly generated neural network. At the end of each generation, every blob on the left half is removed (highlighted in red), and the surviving blobs repopulate the next generation. As the generations progress, blobs increasingly tend to move towards the right, since that is the only way to survive selection. In each generation, one sample blob is highlighted in blue, and its neural network data is displayed on the user interface.
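The selection and repopulation step described above can be sketched roughly as follows. The `Blob` struct, its field names, and the round-robin cloning of survivors are illustrative assumptions; the actual repo may mix parents or re-randomize positions differently.

```rust
// Hypothetical minimal blob: a position and flattened network weights.
// These names are illustrative, not the repo's actual types.
#[derive(Clone)]
struct Blob {
    x: f32,            // horizontal position in the 128x128 world
    y: f32,            // vertical position
    weights: Vec<f32>, // flattened neural-network weights
}

const WORLD_SIZE: f32 = 128.0;
const POPULATION: usize = 200;

/// Remove every blob on the left half, then refill the population
/// by cloning survivors round-robin (an assumed repopulation scheme).
fn next_generation(mut blobs: Vec<Blob>) -> Vec<Blob> {
    blobs.retain(|b| b.x >= WORLD_SIZE / 2.0);
    if blobs.is_empty() {
        return blobs; // extinction: nothing to repopulate from
    }
    let survivors = blobs.clone();
    let mut i = 0;
    while blobs.len() < POPULATION {
        blobs.push(survivors[i % survivors.len()].clone());
        i += 1;
    }
    blobs
}
```

Repopulating only from right-half survivors is what biases later generations towards rightward movement.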
https://github.com/user-attachments/assets/16f84834-51f0-4f74-8cd2-63bffc20d8f4
## Analyzing neural networks
While it may be impossible to understand the reasoning behind large neural networks (such as those that power large language models), we can attempt to analyze the networks behind these evolutionarily stable strategies thanks to their tiny size (just 8 neurons).
The neural network of the final sample in the video above (from generation 10) is shown in Figure 1. Notably, the sum of the values fed into Mx (at the bottom left) tends to be positive, so when it is passed through `probability . tanh . sum` (the activation function for Mx), the blob becomes increasingly likely to move towards the right.

Figure 1: neural network of the sample blob from generation 10.
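The `probability . tanh . sum` pipeline for an output neuron like Mx can be sketched as below. The exact probability mapping is an assumption: here the tanh output t in (-1, 1) is read as "move right with probability (t + 1) / 2", so a positive input sum means a move-right chance above 50%.

```rust
/// Assumed activation for the Mx output neuron: sum the weighted
/// inputs, squash with tanh into (-1, 1), then rescale to a
/// probability in (0, 1). The repo's actual mapping may differ.
fn move_right_probability(weighted_inputs: &[f32]) -> f32 {
    let sum: f32 = weighted_inputs.iter().sum();
    let t = sum.tanh();
    (t + 1.0) / 2.0
}
```

With this mapping, a zero input sum yields exactly 0.5, matching the intuition that an unbiased neuron moves the blob right or left with equal likelihood.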
## Next steps
I would love to implement rare mutations in order to improve survivorship in changing environments. In terms of simulation logic, I would like to implement collisions, killing neighbours, and pheromones. For data analysis, I want to build tools such as a streamlined neural network directed graph generator.
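One possible shape for the rare-mutation idea is a small per-weight perturbation applied with low probability during repopulation. Everything here is a sketch: the 1% mutation rate, the nudge range, and the tiny deterministic LCG (used only so the example has no dependencies; a real implementation would use a proper RNG such as the `rand` crate).

```rust
/// Minimal deterministic linear congruential generator, used only to
/// keep this sketch dependency-free. Not suitable for real use.
struct Lcg(u64);

impl Lcg {
    /// Returns a pseudo-random f32 in [0, 1).
    fn next_f32(&mut self) -> f32 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 40) as f32) / (1u64 << 24) as f32
    }
}

/// Assumed mutation rate: each weight has a 1% chance to mutate.
const MUTATION_RATE: f32 = 0.01;

/// With small probability, nudge each weight by a value in [-0.5, 0.5).
fn mutate(weights: &mut [f32], rng: &mut Lcg) {
    for w in weights.iter_mut() {
        if rng.next_f32() < MUTATION_RATE {
            *w += rng.next_f32() - 0.5;
        }
    }
}
```

Keeping mutations rare preserves the strategies selection has already found, while still letting a population escape a local optimum when the environment changes.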