https://github.com/willayy/modularml-lib
Library version of modularml that's easily installed and used.
- Host: GitHub
- URL: https://github.com/willayy/modularml-lib
- Owner: willayy
- License: mit
- Created: 2025-04-30T13:52:17.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2025-05-08T09:42:52.000Z (about 1 month ago)
- Last Synced: 2025-05-08T10:33:47.277Z (about 1 month ago)
- Language: C++
- Size: 11.3 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Contributing: .github/CONTRIBUTING.md
- License: LICENSE
README
# ModularML
A lightweight and modular library for building C++-integrated machine learning models from ONNX models.

### Installation
Install the library and link it to your project using CMake. This is done by pasting the following code into your CMakeLists.txt file.
```cmake
# ----------------------- MODULARML -------------------------- #
include(FetchContent)
FetchContent_Declare(
modularml
GIT_REPOSITORY https://github.com/willayy/modularml-lib
GIT_TAG # pin to a release tag or commit hash
)
FetchContent_MakeAvailable(modularml)
target_link_libraries(MyProject PRIVATE modularml)
# ------------------------------------------------------------ #
```

### Usage
```cpp
#include <iostream>
// NOTE: the modularml header names were lost from this snippet; include the
// library's tensor headers here.

// Define the blocked gemm flag to enable compilation of the blocked gemm function
#define USE_BLOCKED_GEMM

int main() {
// Dynamically change the gemm function pointer to point to the blocked version, this works for any function
// with a matching signature.
TensorOperations::set_gemm_ptr(
mml_gemm_blocked,
mml_gemm_blocked,
mml_gemm_blocked
);

// Use the TensorFactory to create tensors
auto a = TensorFactory::create_tensor({2, 3}, {1, 2, 3, 4, 5, 6});
auto b = TensorFactory::create_tensor({3, 2}, {7, 8, 9, 10, 11, 12});
auto c = TensorFactory::create_tensor({2, 2}); // (2x3) x (3x2) gives a 2x2 result

// Perform the gemm operation using the blocked version.
TensorOperations::gemm(0, 0, 2, 2, 3, 1, a, 3, b, 2, 1, c, 2);

// Print the result, ensuring that the computation works.
std::cout << "Result of gemm: " << (*c) << std::endl;

return 0;
}
```

#### Usage with AlexNet
```cpp
#include <fstream>
#include <iostream>
#include <nlohmann/json.hpp>
// NOTE: the modularml header names were lost from this snippet; include the
// library's parser and tensor factory headers here.

int main() {
/* Load the JSON file produced by onnx2json; that script can be downloaded
externally, or you can use the one found in build/_deps/modularml-src/onnx2json */
std::ifstream file("../src/alexnet_trained.json");
nlohmann::json json_model;
file >> json_model;
file.close();
// Parse the model into a model object, containing all the nodes.
auto parser = Parser_mml();
auto model = parser.parse(json_model);
// Create an empty input tensor
auto input = TensorFactory::create_tensor({1, 3, 224, 224});
// Create an input map
// The map's template arguments were lost from this snippet; mapping the
// input name to the tensor created above
std::unordered_map<std::string, decltype(input)> input_map;
input_map["input"] = input;
// Run inference with the input map
auto output_map = model->infer(input_map);
// Get the output
// Get the output (the template arguments were lost from this snippet; a
// shared_ptr to a float tensor is assumed here)
auto output = std::get_if<std::shared_ptr<Tensor<float>>>(&(output_map["output"]));
// Extract the output tensor and print it!
std::cout << "Output: " << (**output) << std::endl;
return 0;
}
```

### Contributing
We welcome contributions!
Please read our [Contributing Guide](CONTRIBUTING.md) for instructions on how to get started.

### License
This project is licensed under the [MIT License](LICENSE).

### Acknowledgments
This library is a fork of the framework developed as part of a Bachelor's Thesis at Chalmers University of Technology, which can be found [here](https://github.com/willayy/modularml).