https://github.com/yeonghyeon/white-box-layer
Deep Learning Package based on TensorFlow.
- Host: GitHub
- URL: https://github.com/yeonghyeon/white-box-layer
- Owner: YeongHyeon
- License: mit
- Created: 2021-05-28T05:53:39.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2022-05-09T06:01:58.000Z (over 3 years ago)
- Last Synced: 2024-11-05T05:42:50.538Z (11 months ago)
- Topics: deep-learning, deep-learning-library, machine-learning, neural-network, python, tensorflow, tensorflow2
- Language: Python
- Homepage:
- Size: 148 KB
- Stars: 8
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
---
White-Box-Layer is a Python module for deep learning built on top of TensorFlow, distributed under the MIT license. The project was started in May 2021 by YeongHyeon Park.

This project is open to anyone who wants to participate. Contribute now!

## Installation
### Dependencies
whiteboxlayer requires:
* Numpy: 1.19.5
* Scipy: 1.4.1
* TensorFlow: 2.4.0

### User installation
You can install white-box-layer with the following command.
``` sh
$ pip install whiteboxlayer
```

## Development
We welcome new contributors of all experience levels. The white-box-layer community's goals are to be helpful, welcoming, and effective. The Development Guide has detailed information about contributing code, documentation, tests, and more. We've included some basic information in this README.

## Example
### Example for Convolutional Neural Network
This section covers an example of constructing a convolutional neural network. The full source code is also provided via the following link.

* Jupyter notebook

#### Define TensorFlow-based module
``` python
import tensorflow as tf
import whiteboxlayer.layers as wbl

class Neuralnet(tf.Module):

    def __init__(self, **kwargs):
        super(Neuralnet, self).__init__()

        self.who_am_i = kwargs['who_am_i']
        self.dim_h = kwargs['dim_h']
        self.dim_w = kwargs['dim_w']
        self.dim_c = kwargs['dim_c']
        self.num_class = kwargs['num_class']
        self.filters = kwargs['filters']

        self.layer = wbl.Layers()
        self.forward = tf.function(self.__call__)

    @tf.function
    def __call__(self, x, verbose=False):
        logit = self.__nn(x=x, name=self.who_am_i, verbose=verbose)
        y_hat = tf.nn.softmax(logit, name="y_hat")
        return logit, y_hat

    def __nn(self, x, name='neuralnet', verbose=True):
        # convolution + max-pooling blocks (filters[0] is the input channel count)
        for idx, _ in enumerate(self.filters[:-1]):
            if idx == 0: continue
            x = self.layer.conv2d(x=x, stride=1, \
                filter_size=[3, 3, self.filters[idx-1], self.filters[idx]], \
                activation='relu', name='%s-%dconv' % (name, idx), verbose=verbose)
            x = self.layer.maxpool(x=x, ksize=2, strides=2, \
                name='%s-%dmp' % (name, idx), verbose=verbose)

        # flatten, then classify (filters[-1] is the width of the hidden FC layer)
        x = tf.reshape(x, shape=[x.shape[0], -1], name="flat")
        x = self.layer.fully_connected(x=x, c_out=self.filters[-1], \
            activation='relu', name="%s-clf0" % (name), verbose=verbose)
        x = self.layer.fully_connected(x=x, c_out=self.num_class, \
            activation=None, name="%s-clf1" % (name), verbose=verbose)
        return x
```

#### Initializing module
``` python
model = Neuralnet(\
    who_am_i="CNN", \
    dim_h=28, dim_w=28, dim_c=1, \
    num_class=10, \
    filters=[1, 32, 64, 128])

dummy = tf.zeros((1, model.dim_h, model.dim_w, model.dim_c), dtype=tf.float32)
model.forward(x=dummy, verbose=True)
```

#### Results
``` sh
Conv (CNN-1conv) (1, 28, 28, 1) -> (1, 28, 28, 32)
MaxPool (CNN-1mp) (1, 28, 28, 32) -> (1, 14, 14, 32)
Conv (CNN-2conv) (1, 14, 14, 32) -> (1, 14, 14, 64)
MaxPool (CNN-2mp) (1, 14, 14, 64) -> (1, 7, 7, 64)
FC (CNN-clf0) (1, 3136) -> (1, 128)
FC (CNN-clf1) (1, 128) -> (1, 10)
```
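The flattened width of 3136 at the first FC layer in the trace above follows from simple shape arithmetic: the 3x3 convolutions preserve spatial size, while each 2x2 max-pool with stride 2 halves it, so 28 -> 14 -> 7 per dimension, across 64 channels. A minimal, TensorFlow-free sketch of that calculation (the helper name is illustrative, not part of the library):

``` python
def flattened_size(dim_h, dim_w, filters):
    """Compute the flattened feature size entering the first FC layer.

    Convolutions keep the spatial size; only the 2x2 max-pool (stride 2)
    stages shrink it. filters[1:-1] are the conv output channel counts
    (one pooling stage per conv block); filters[-1] is the FC width.
    """
    h, w = dim_h, dim_w
    for _ in filters[1:-1]:      # one halving per conv + max-pool block
        h, w = h // 2, w // 2
    return h * w * filters[-2]   # spatial area times last conv channels

print(flattened_size(28, 28, [1, 32, 64, 128]))  # 7 * 7 * 64 = 3136
```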