Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
A Simple & Flexible Cross Framework Operators Toolkit
https://github.com/wkcn/MobulaOP
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/wkcn/MobulaOP
- Owner: wkcn
- License: mit
- Created: 2018-05-06T12:08:25.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2020-09-27T11:37:25.000Z (over 4 years ago)
- Last Synced: 2024-08-01T22:42:16.793Z (6 months ago)
- Topics: cross-framework, cupy, deep-learning, mxnet, numpy, operators, pytorch, scientific-computing
- Language: Python
- Homepage:
- Size: 600 KB
- Stars: 164
- Watchers: 10
- Forks: 21
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-MXNet - MobulaOP
README
# MobulaOP
Linux | Windows | Coverage | Badge
------|---------|----------|------
[![Linux Build Status](https://api.travis-ci.org/wkcn/MobulaOP.svg?branch=master)](https://travis-ci.org/wkcn/MobulaOP)|[![Windows Build Status](https://ci.appveyor.com/api/projects/status/bvnavb8k2xnu0wqj/branch/master?svg=true)](https://ci.appveyor.com/project/wkcn/mobulaop/branch/master)|[![Coverage Status](https://coveralls.io/repos/github/wkcn/MobulaOP/badge.svg?branch=master)](https://coveralls.io/github/wkcn/MobulaOP?branch=master)|[![996.icu](https://img.shields.io/badge/link-996.icu-red.svg)](https://996.icu)

## What is it?
*MobulaOP* is a simple and flexible cross-framework operators toolkit. You can write custom operators in Python/C++/C/CUDA/HIP/TVM without rebuilding the deep learning framework from source.
## How to use it?
[[Tutorial (Chinese)](docs/tutorial-cn.md)]
[[Tutorial (English)](docs/tutorial-en.md)]
- Add an addition operator [[Code](examples/MyFirstOP.py)]
```python
import mobula

@mobula.op.register
class MyFirstOP:
    def forward(self, x, y):
        return x + y

    def backward(self, dy):
        return [dy, dy]

    def infer_shape(self, in_shape):
        assert in_shape[0] == in_shape[1]
        return in_shape, [in_shape[0]]

# MXNet
import mxnet as mx
a = mx.nd.array([1, 2, 3])
b = mx.nd.array([4, 5, 6])
c = MyFirstOP(a, b)
print(c)  # [5, 7, 9]

# PyTorch
import torch
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
c = MyFirstOP(a, b)
print(c)  # [5, 7, 9]

# NumPy
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
op = MyFirstOP[np.ndarray]()
c = op(a, b)
print(c)  # [5, 7, 9]

# CuPy
import cupy as cp
a = cp.array([1, 2, 3])
b = cp.array([4, 5, 6])
op = MyFirstOP[cp.ndarray]()
c = op(a, b)
print(c)  # [5, 7, 9]
```

- Use **custom operators** without rebuilding the deep learning framework from source [[Code](examples/RunROIAlign.py)]
```python
# Use the ROIAlign operator
import mxnet as mx
import numpy as np
import mobula

# Load the ROIAlign module
mobula.op.load('ROIAlign')

ctx = mx.cpu(0)
dtype = np.float32
N, C, H, W = 2, 3, 4, 4

data = mx.nd.array(np.arange(N * C * H * W).astype(dtype).reshape((N, C, H, W)))
rois = mx.nd.array(np.array([[0, 1, 1, 3, 3]], dtype=dtype))

data.attach_grad()
with mx.autograd.record():
    # Both mx.nd.NDArray and mx.sym.Symbol are available as inputs.
    output = mobula.op.ROIAlign(data=data, rois=rois, pooled_size=(2, 2),
                                spatial_scale=1.0, sampling_ratio=1)

# Run the backward pass so data.grad is populated before printing it.
output.backward()
print(output.asnumpy(), data.grad.asnumpy())
```

- Import a custom C++ operator dynamically [[Code](examples/dynamic_import_op/dynamic_import_op.py)]
```python
import mobula

# Import the custom operator dynamically
mobula.op.load('./AdditionOP')

import mxnet as mx
a = mx.nd.array([1, 2, 3])
b = mx.nd.array([4, 5, 6])
c = mobula.op.AdditionOP(a, b)

print('a + b = c \n {} + {} = {}'.format(a.asnumpy(), b.asnumpy(), c.asnumpy()))
```

## How to get it?
```bash
# Clone the project
git clone https://github.com/wkcn/MobulaOP

# Enter the directory
cd MobulaOP

# Install MobulaOP
pip install -v -e .
```
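After installing, a quick optional sanity check (a sketch; it assumes only that the package's import name is `mobula`, as used in the examples above, and that the `pip install` step succeeded in the current environment):

```shell
# Verify the editable install is importable from the current environment.
python -c "import mobula"
```

If the command exits without an `ImportError`, the toolkit is ready to use.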