https://github.com/derunelabs/enola
library to perform some tensor operation for deep and machine learning
- Host: GitHub
- URL: https://github.com/derunelabs/enola
- Owner: DeRuneLabs
- License: mit
- Created: 2025-03-16T22:51:15.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2025-05-14T05:43:49.000Z (11 months ago)
- Last Synced: 2025-05-14T06:49:06.312Z (11 months ago)
- Topics: algorithms, cpp, cpp17, opencl, tensor
- Language: C++
- Homepage:
- Size: 160 KB
- Stars: 5
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Enola
Library for performing tensor operations for specific projects, built with C++17.
Currently includes:
- Function:
  - Sigmoid
- Activation Function:
  - Binary Step
  - Exponential Linear Unit (ELU)
  - Rectified Linear Unit (ReLU)
  - Softplus
  - Squareplus
  - Swish
- Operation:
  - Deep Copy
- Score:
  - Mean Absolute Error
  - Mean Square Error
- Tensor:
  - Operation
  - Storage (currently supported on CPU)
  - Tensor View
## Basic usage
**Simple neural network**
```cpp
#include "enola/neural.hpp" // header path assumed; check the repository layout
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
  try {
    // define the architecture of the neural network
    // each value in the vector is the number of neurons in a layer:
    // - the first value is the size of the input layer
    // - intermediate values are the hidden layers
    // - the last value is the size of the output layer
    std::vector<std::size_t> layer_size = {
        2,  // input: 2 neurons (two features)
        3,  // hidden layer: 3 neurons
        1,  // output: 1 neuron (binary classification or regression)
    };
    // initialize the neural network with this architecture
    enola::neural::NeuralNetwork nn(layer_size);
    // prepare input data; this example uses a simple vector with two values
    std::vector<double> input = {0.5, 0.8};
    // run forward propagation to compute the network's output
    std::vector<double> output = nn.forward_propagation(input);
    // print the output of the neural network
    std::cout << "output: ";
    for (double val : output) {
      std::cout << val << " ";  // print each output value
    }
    std::cout << std::endl;
  } catch (const std::exception &error) {
    std::cerr << "error: " << error.what() << std::endl;
  }
  return 0;
}
```
For more examples, check the [example](example) folder.
## Tensor storage with GPU processing
Enola currently supports tensor storage and processing on the GPU through OpenCL (Open Computing Language). OpenCL provides APIs to manage GPU memory explicitly, so memory buffers can be allocated for tensors directly. For more information about OpenCL, see the [Khronos OpenCL page](https://www.khronos.org/opencl/).