https://github.com/e1evenn/shinnosuke

A Keras-like API deep learning framework implemented with NumPy only. Supports CNN, RNN, LSTM, Dense, etc.
- Host: GitHub
- URL: https://github.com/e1evenn/shinnosuke
- Owner: E1eveNn
- License: MIT
- Created: 2019-08-08T05:12:57.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-04-16T12:12:23.000Z (about 5 years ago)
- Last Synced: 2025-04-12T06:12:50.098Z (23 days ago)
- Topics: deep-learning, deep-learning-algorithms, deep-learning-framework, deep-learning-library, deep-learning-tutorial, deep-neural-networks
- Language: Python
- Size: 246 KB
- Stars: 21
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
# Shinnosuke: Deep Learning Framework
## Descriptions
Shinnosuke is a high-level neural network framework whose API is almost identical to Keras, with slight differences. It is written in pure Python and designed to enable fast experimentation.
Here are some features of Shinnosuke:
1. Based on **Numpy** (CPU version) and **native** to Python; a [GPU version](https://github.com/eLeVeNnN/shinnosuke-gpu) is also available.
2. Requires **no** other **third-party** deep learning libraries.
3. **Keras-like API**; several basic examples are provided, so it is easy to get started.
4. Supports commonly used layers such as *Dense, Conv2D, MaxPooling2D, LSTM, SimpleRNN, etc.*
5. Both the **Sequential** model (for most linear stacks of layers) and the **Functional** model (for ResNet, etc.) are implemented.
6. Training is conducted on forward and backward **graphs**.
7. **Autograd** is supported.

Shinnosuke is compatible with **Python 3.x (3.6 recommended)**.
------
## Getting started
The core data structure of Shinnosuke is a model, which provides a way to combine layers. There are two model types: `Sequential` (a linear stack of layers) and `Functional` (a graph of layers).
Here is an example of a `Sequential` model:
```python
from shinnosuke.models import Sequential

m = Sequential()
```

Use `.add()` to connect layers:
```python
from shinnosuke.layers import Dense

m.add(Dense(n_out=500, activation='relu', n_in=784))  # n_in must be specified if this is the first layer of the model
m.add(Dense(n_out=10, activation='softmax'))  # no need to specify n_in; shinnosuke automatically infers the input and output shapes
```

One difference from Keras: `n_out` and `n_in` are called `units` and `input_dim` in Keras, respectively.
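For reference, here is the equivalent model in Keras, shown only to highlight the renamed arguments (it requires a Keras installation, which Shinnosuke itself does not):

```python
# Keras equivalent of the Shinnosuke model above, for comparison only
from keras.models import Sequential
from keras.layers import Dense

k = Sequential()
k.add(Dense(units=500, activation='relu', input_dim=784))
k.add(Dense(units=10, activation='softmax'))
```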
Once you have constructed your model, you should configure it with `.compile()` before training:
```python
m.compile(loss='sparse_categorical_crossentropy', optimizer='sgd')
```

If you apply softmax to a multi-class task and your labels are one-hot encoded vectors/matrices, specify the loss as `sparse_categorical_crossentropy`; otherwise use `categorical_crossentropy`. (Note that in Keras it is `categorical_crossentropy` that supports one-hot encoded labels.) Shinnosuke models support only one metric, **accuracy**, and it does not need to be specified in `compile()`.
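To make this concrete, here is a pure-NumPy illustration of cross-entropy over one-hot labels, the quantity called `sparse_categorical_crossentropy` in Shinnosuke's naming; this is a textbook sketch, not Shinnosuke's internal code:

```python
import numpy as np

# softmax outputs for 3 samples over 4 classes
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])
# one-hot labels (the format described above)
onehot = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1]])
# cross-entropy: mean over samples of -sum(y * log(p))
loss = -np.mean(np.sum(onehot * np.log(probs), axis=1))
print(loss)  # ~0.469
```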
Use `print()` function to see details of model:
```python
print(m)
# you should see:
***************************************************************************
Layer(type) Output Shape Param Connected to
###########################################################################
Dense (None, 500) 392500
---------------------------------------------------------------------------
Dense (None, 10) 5010 Dense
---------------------------------------------------------------------------
***************************************************************************
Total params: 397510
Trainable params: 397510
Non-trainable params: 0
```

The parameter counts follow from weights plus biases: 784 × 500 + 500 = 392,500 and 500 × 10 + 10 = 5,010. Having finished `compile`, you can start training on your data in batches:
```python
# trainX and trainy are NumPy arrays; for 2-D data, trainX's shape should be (num_samples, input_dim)
m.fit(trainX, trainy, batch_size = 128, epochs = 5, validation_ratio = 0., draw_acc_loss = True)
```

By specifying `validation_ratio` in `(0.0, 1.0]`, Shinnosuke will split a validation set from the training data according to that ratio; the default `validation_ratio = 0.` means no validation data. Alternatively, you can pass `validation_data` manually:
```python
m.fit(trainX, trainy, batch_size = 128, epochs = 5, validation_data = (validX, validy), draw_acc_loss = True)
```

If `draw_acc_loss` is **True**, a dynamically updating figure of accuracy and loss is drawn during training.
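For reference, the `validation_ratio` split presumably behaves like the following NumPy sketch; this illustrates the ratio logic only and is not Shinnosuke's actual implementation:

```python
import numpy as np

def split_by_ratio(X, y, validation_ratio):
    # hold out the last fraction of the data as a validation set
    n_valid = int(len(X) * validation_ratio)
    if n_valid == 0:
        return (X, y), (None, None)
    return (X[:-n_valid], y[:-n_valid]), (X[-n_valid:], y[-n_valid:])

X = np.random.rand(100, 784)
y = np.random.randint(0, 10, size=(100, 1))
(trainX, trainy), (validX, validy) = split_by_ratio(X, y, 0.2)
print(trainX.shape, validX.shape)  # (80, 784) (20, 784)
```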
Once training is complete, you can save or load your model with `save()` / `load()`, respectively.
```python
m.save(save_path)
m.load(model_path)
```

Evaluate your model's performance with `.evaluate()`:
```python
acc, loss = m.evaluate(testX, testy, batch_size=128)
```

Or obtain predictions on new data:
```python
y_hat = m.predict(x_test)
```

For a `Functional` model, first instantiate an `Input` layer:
```python
from shinnosuke.layers import Input

X_input = Input(shape = (None, 1, 28, 28))  # (batch_size, channels, height, width)
```

You need to specify the input shape. Note that for convolutional networks, the channel dimension must be on axis `1` instead of `-1`, and the batch size should be given as `None`, which is unnecessary in Keras.
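If your images are stored channels-last (for example `(N, 28, 28, 1)`, as many datasets ship them), a plain NumPy transpose moves them into the channels-first layout expected here:

```python
import numpy as np

x = np.random.rand(64, 28, 28, 1)      # (batch, height, width, channels)
x_chw = np.transpose(x, (0, 3, 1, 2))  # -> (batch, channels, height, width)
print(x_chw.shape)                     # (64, 1, 28, 28)
```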
Then combine your layers with the functional API:
```python
from shinnosuke.models import Model
from shinnosuke.layers import Conv2D,MaxPooling2D
from shinnosuke.layers import Activation
from shinnosuke.layers import BatchNormalization
from shinnosuke.layers import Flatten, Dense

X = Conv2D(8, (2, 2), padding = 'VALID', initializer = 'normal', activation = 'relu')(X_input)
X = MaxPooling2D((2, 2))(X)
X = Flatten()(X)
X = Dense(10, initializer = 'normal', activation = 'softmax')(X)
model = Model(inputs = X_input, outputs = X)
model.compile(optimizer = 'sgd', loss = 'sparse_categorical_crossentropy')
model.fit(trainX, trainy, batch_size = 256, epochs = 80, validation_ratio = 0.)
```

Pass the input and output layers to `Model()`, then compile and fit the model just like a `Sequential` model.
Building an image classification model, a question answering system, or any other model is just as convenient and fast.
In the [Examples folder](https://github.com/eLeVeNnN/shinnosuke/Examples/) of this repository, you can find more advanced models.
------
## Both dynamic and static graph features
As you will see below, Shinnosuke has two basic classes: Layer and Node. For Layers, operations between layers can be described like this (here is an example of `+`):
```python
from shinnosuke.layers import Input,Add
from shinnosuke.layers import Dense

X = Input(shape = (3, 5))
X_shortcut = X
X = Dense(5)(X)  # Dense outputs a (3, 5) tensor
X = Add()([X_shortcut, X])
```

Meanwhile, Shinnosuke constructs a graph of these layers internally.
Node operations, by contrast, have both dynamic-graph and static-graph features:
```python
from shinnosuke.layers.Base import Variable

x = Variable(3)
y = Variable(5)
z = x + y
print(z.get_value())
```

You should get the value 8; at the same time, Shinnosuke constructs a graph of the expression internally.
## Autograd
What is autograd? In a word, it means the framework automatically calculates the network's gradients, so users do not need to write any backward code themselves. Shinnosuke's autograd supports several operators, such as `+`, `-`, `*`, `/`, etc. Here is an example.
For a simple fully connected neural network, you can use `Dense()` to construct it:
```python
from shinnosuke.models import Sequential
from shinnosuke.layers import Dense
import numpy as np

# declare a Dense layer
fullyconnected = Dense(4, n_in = 5)
m = Sequential()
m.add(fullyconnected)
m.compile(optimizer = 'sgd', loss = 'mse')  # we don't actually train it; compile just initializes the parameters
#initialize inputs
np.random.seed(0)
X = np.random.rand(3, 5)
#feed X as fullyconnected's inputs
fullyconnected.feed(X, 'inputs')
#forward
fullyconnected.forward()
out1 = fullyconnected.get_value()
print(out1)
#feed gradient to fullyconnected
fullyconnected.feed(np.ones_like(out1), 'grads')
#backward
fullyconnected.backward()
W, b = fullyconnected.variables
print(W.grads)
```

We can construct the same computation with the following code:
```python
from shinnosuke.layers import Variable

a = Variable(X)  # the same as X in the previous fullyconnected layer
c = Variable(W.get_value())  # the same value as W in the previous fullyconnected layer
d = Variable(b.get_value())  # the same value as b in the previous fullyconnected layer
out2 = a @ c + d  # @ stands for matmul
print(out2.get_value())
out2.grads = np.ones_like(out2.get_value())  # specify upstream gradients
# by calling grad(), shinnosuke automatically calculates the gradient from out2 back to c
c.grad()
print(c.grads)
```

Guess what? `out1` has the same value as `out2`, and `W` and `c` have the same gradients. This is the magic of Shinnosuke's autograd. **Using this feature, users can implement new networks as they wish without writing any backward code.**
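You can verify this by hand: for `out = a @ c + d` with upstream gradients of all ones, the gradient with respect to `c` is `a.T @ dout`. A quick pure-NumPy check (an illustration only; the shapes match the `Dense(4, n_in = 5)` example above):

```python
import numpy as np

np.random.seed(0)
a = np.random.rand(3, 5)   # inputs
c = np.random.rand(5, 4)   # weights
d = np.random.rand(4)      # biases

out = a @ c + d
dout = np.ones_like(out)   # upstream gradients, all ones
dc = a.T @ dout            # gradient of out with respect to c
print(dc)                  # the quantity autograd computes for c.grads
```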
See autograd example in [Here!](https://github.com/eLeVeNnN/shinnosuke-gpu/blob/master/Examples/autograd.ipynb)
## Installation
Before installing Shinnosuke, please install the following **dependencies**:
- Numpy == 1.15.0 (recommended)
- matplotlib == 3.0.3 (recommended)

Then you can install Shinnosuke with pip:
`$ pip install shinnosuke`
**Installation from Github source will be supported in the future.**
------
## Supports
### Two basic classes:
#### - Layer:
- Dense
- Flatten
- Conv2D
- MaxPooling2D
- MeanPooling2D
- Activation
- Input
- Dropout
- Batch Normalization
- Layer Normalization
- Group Normalization
- TimeDistributed
- SimpleRNN
- LSTM
- Embedding
- GRU (to be implemented)
- ZeroPadding2D
- Add
- Minus
- Multiply
- Matmul

#### - Node:
- Variable
- Constant

### Optimizers
- StochasticGradientDescent
- Momentum
- RMSprop
- AdaGrad
- AdaDelta
- Adam

More optimizers are waiting to be implemented.
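For reference, here is the classic momentum update rule in plain NumPy; this is a textbook sketch of what such an optimizer does, not Shinnosuke's implementation:

```python
import numpy as np

def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    # accumulate a running velocity, then step along it
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

w = np.zeros(3)
v = np.zeros(3)
for _ in range(100):
    grad = 2 * (w - 1.0)  # gradient of sum((w - 1)^2)
    w, v = momentum_step(w, grad, v)
print(w)  # converges toward the minimum at [1, 1, 1]
```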
### Objectives
- MeanSquaredError
- MeanAbsoluteError
- BinaryCrossEntropy
- SparseCategoricalCrossEntropy
- CategoricalCrossEntropy

### Activations
- Relu
- Linear
- Sigmoid
- Tanh
- Softmax

### Initializations
- Zeros
- Ones
- Uniform
- LecunUniform
- GlorotUniform
- HeUniform
- Normal
- LecunNormal
- GlorotNormal
- HeNormal
- Orthogonal
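For reference, the Glorot and He schemes scale the weight variance by the layer's fan-in/fan-out; a textbook NumPy sketch, not Shinnosuke's code:

```python
import numpy as np

fan_in, fan_out = 784, 500

# GlorotNormal: std = sqrt(2 / (fan_in + fan_out))
w_glorot = np.random.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), (fan_in, fan_out))

# HeNormal: std = sqrt(2 / fan_in), suited to relu activations
w_he = np.random.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out))

print(w_glorot.std(), w_he.std())  # ~0.0395, ~0.0505
```

### Regularizers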
Waiting to be implemented.
### Utils
- get_batches (generate mini-batch)
- to_categorical (convert inputs to one-hot vector/matrix)
- concatenate (concatenate Nodes with the same shape along a specified axis)
- pad_sequences (pad sequences to the same length)
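A short usage sketch of `to_categorical`; note that the import path `shinnosuke.utils` is an assumption, since the README lists the helpers but not their module:

```python
import numpy as np
# assumption: to_categorical lives in shinnosuke.utils; adjust to your install
from shinnosuke.utils import to_categorical

y = np.array([0, 2, 1, 2])    # integer class labels
y_onehot = to_categorical(y)  # -> one-hot matrix, e.g. shape (4, 3)
print(y_onehot)
```

## Contact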
- Email: [email protected]