https://github.com/simpsonresearch/relu_vs_gelu
- Host: GitHub
- URL: https://github.com/simpsonresearch/relu_vs_gelu
- Owner: simpsonresearch
- Created: 2024-06-14T14:35:51.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-06-14T19:58:54.000Z (over 1 year ago)
- Last Synced: 2025-08-11T15:30:05.602Z (5 months ago)
- Language: Python
- Size: 7.81 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# ReLU vs. GeLU
In my example in `main.py`, there is a clear difference between the two activations: GeLU reaches a noticeably lower test loss than ReLU. That won't always be the case, though, so it's worth trying different activation functions to see which one works best for your specific use case.
### Example Output
An example output from running `main.py`:
```
Test Loss (GeLU): 0.13099078834056854
Test Loss (ReLU): 0.31609782576560974
Actual: 30.0 | GeLU: 30.32281494140625 | ReLU: 30.15665626525879
Actual: 22.0 | GeLU: 21.602794647216797 | ReLU: 21.220478057861328
```
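### Sketch of the Comparison
For reference, here is a minimal sketch of this kind of comparison. It is not the repository's actual `main.py`; the regression task (learning `y = 3x` from noisy data), the network sizes, and the hyperparameters are assumptions chosen for illustration.
```python
# Minimal ReLU vs. GeLU comparison sketch (assumptions: toy y = 3x task,
# small MLPs, Adam with default-ish settings). Not the repository's main.py.
import torch
import torch.nn as nn

torch.manual_seed(0)


def make_mlp(activation: nn.Module) -> nn.Module:
    # Small MLP for 1-D regression; only the activation differs between runs.
    return nn.Sequential(
        nn.Linear(1, 32),
        activation,
        nn.Linear(32, 32),
        activation,
        nn.Linear(32, 1),
    )


def train_and_evaluate(model, x_train, y_train, x_test, y_test) -> float:
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(2000):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return loss_fn(model(x_test), y_test).item()


# Toy data: y = 3x with a little Gaussian noise (an assumption, for illustration).
x_train = torch.rand(256, 1) * 10
y_train = 3 * x_train + 0.1 * torch.randn_like(x_train)
x_test = torch.rand(64, 1) * 10
y_test = 3 * x_test

gelu_loss = train_and_evaluate(make_mlp(nn.GELU()), x_train, y_train, x_test, y_test)
relu_loss = train_and_evaluate(make_mlp(nn.ReLU()), x_train, y_train, x_test, y_test)
print(f"Test Loss (GeLU): {gelu_loss}")
print(f"Test Loss (ReLU): {relu_loss}")
```
The only difference between the two runs is the activation module passed to `make_mlp`, which keeps the comparison apples-to-apples; any gap in test loss then comes from the activation choice (plus run-to-run noise).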