https://github.com/csirmaz/trained-linearization
Interpreting neural networks by reducing nonlinearities during training
- Host: GitHub
- URL: https://github.com/csirmaz/trained-linearization
- Owner: csirmaz
- License: mit
- Created: 2019-07-11T22:23:56.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2019-07-22T22:58:10.000Z (over 6 years ago)
- Last Synced: 2025-04-12T12:43:40.040Z (9 months ago)
- Topics: interpretability, linearization, lua, machine-learning, neural-network, rule-extraction, torch
- Language: TeX
- Homepage:
- Size: 111 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Interpreting Neural Networks by Reducing Nonlinearities during Training
This repo contains a short paper and sample code demonstrating a simple technique that makes it possible to
extract rules from a neural network that employs Parametric Rectified Linear Units (PReLUs).
We introduce a force, applied in parallel to backpropagation, that
pushes the PReLUs toward the identity function; once they reach it,
the neural network collapses into a smaller system of linear functions and inequalities
suitable for review or use by human decision makers.
As this force reduces the capacity of the network, it is also expected to help avoid overfitting.
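
The sample code in this repo is written in Lua/Torch. The snippet below is only a minimal sketch of one way such a linearizing force could be applied, not the repository's exact implementation: after each ordinary backpropagation step, every PReLU slope is nudged a small step toward 1, the value at which the unit becomes the identity. The toy network, toy task, and the `pull` strength are illustrative assumptions.

```lua
-- Sketch (assumed details): a toy regression network with PReLU activations
-- where, after every backpropagation step, each PReLU slope is pulled
-- slightly toward 1, i.e. toward the identity function.
require 'nn'

local model = nn.Sequential()
model:add(nn.Linear(2, 8))
model:add(nn.PReLU(8))
model:add(nn.Linear(8, 1))
local criterion = nn.MSECriterion()

local learningRate = 0.01
local pull = 0.001  -- strength of the force toward the identity (assumed value)

for step = 1, 10000 do
  local input  = torch.rand(2):mul(2):add(-1)       -- random point in [-1, 1]^2
  local target = torch.Tensor{input[1] * input[2]}  -- toy target: x * y

  -- ordinary backpropagation step
  local output = model:forward(input)
  criterion:forward(output, target)
  model:zeroGradParameters()
  model:backward(input, criterion:backward(output, target))
  model:updateParameters(learningRate)

  -- the extra force: slope <- slope + pull * (1 - slope) for every PReLU
  for _, m in ipairs(model:findModules('nn.PReLU')) do
    m.weight:mul(1 - pull):add(pull)
  end
end

-- Slopes at (or very near) 1 mean the unit acts as the identity, so the
-- adjacent linear layers can be merged into a single linear function.
for _, m in ipairs(model:findModules('nn.PReLU')) do
  print(m.weight)
end
```

An equivalent pull could also be expressed as a penalty term on the slopes added to the loss; see the paper in this repo for the exact formulation of the force used by the author.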
Download the article in PDF format from the latest release at https://github.com/csirmaz/trained-linearization/releases/latest .