https://github.com/varunagarwal97/deep-network-compression
Compressing Deep Neural Network using Singular Value Decomposition
compression deep-neural-networks mnist-dataset singular-value-decomposition tensorflow-experiments
- Host: GitHub
- URL: https://github.com/varunagarwal97/deep-network-compression
- Owner: varunagarwal97
- Created: 2018-04-08T04:05:58.000Z (almost 8 years ago)
- Default Branch: master
- Last Pushed: 2019-04-23T04:29:28.000Z (almost 7 years ago)
- Last Synced: 2025-04-07T01:53:16.276Z (12 months ago)
- Topics: compression, deep-neural-networks, mnist-dataset, singular-value-decomposition, tensorflow-experiments
- Language: Jupyter Notebook
- Size: 69.3 KB
- Stars: 7
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Deep-Network-Compression
## Compressing Deep Neural Networks using Singular Value Decomposition
This work presents an approach to compressing deep neural networks by performing singular value decomposition (SVD) on the weights of the trained network. The compression we achieve reduces the network size by almost 96%. We validate this approach on the MNIST dataset; it can be applied to much deeper networks as well.
We start off by training a deep neural network with five fully connected hidden layers of 1024 units each to classify the MNIST dataset, reaching 98% accuracy. We then measure the accuracy of the same network after performing SVD on its weights, trying out several low-rank approximations of the trained weight matrices.
Finally, we keep a rank D=20 approximation of each weight matrix and fine-tune the network, recovering almost the same accuracy as the original network.
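The low-rank step can be sketched with NumPy alone. The sketch below assumes a single 1024x1024 weight matrix (random here for illustration; in the project it would come from the trained TensorFlow network) and factors it into two thin matrices whose product is the best rank-d approximation, per the Eckart-Young theorem:

```python
import numpy as np

# Stand-in for one trained 1024x1024 hidden-layer weight matrix
# (random here; the project would use the trained TensorFlow weights).
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))

def low_rank_factors(W, d):
    """Return U_d (m x d) and V_d (d x n) such that U_d @ V_d is the
    best rank-d approximation of W in the least-squares sense."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_d = U[:, :d] * s[:d]   # fold the singular values into the left factor
    V_d = Vt[:d, :]
    return U_d, V_d

U_d, V_d = low_rank_factors(W, d=20)
W_approx = U_d @ V_d         # rank-20 approximation of W
```

In the network itself, the single dense product `x @ W` is then replaced by two much smaller products, `(x @ U_d) @ V_d`, which is what yields the compression.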
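The roughly 96% figure follows directly from the parameter counts for one such layer (biases ignored for simplicity):

```python
# Parameter count for one 1024x1024 layer, before and after a
# rank-20 SVD factorisation.
n, d = 1024, 20
original = n * n              # 1,048,576 weights in the dense layer
factored = n * d + d * n      # 40,960 weights in the two thin factors
reduction = 1 - factored / original
print(f"{reduction:.1%}")     # → 96.1%
```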
We hope you find this interesting!