https://github.com/nomi30701/yolo-segmentation-onnxruntime-web
YOLO instance segmentation web app, powered by onnxruntime-web. Supports WebGPU and WASM (CPU).
- Host: GitHub
- URL: https://github.com/nomi30701/yolo-segmentation-onnxruntime-web
- Owner: nomi30701
- License: MIT
- Created: 2025-03-10T16:54:35.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2025-03-10T17:00:40.000Z (2 months ago)
- Last Synced: 2025-03-10T18:21:37.441Z (2 months ago)
- Topics: gpu-acceleration, instance-segmentation, onnxruntime-gpu, onnxruntime-web, opencv-js, react, wasm, webapp, webgpu, yolo11, yolov11
- Language: JavaScript
- Homepage: https://nomi30701.github.io/yolo-segmentation-onnxruntime-web/
- Size: 53 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# YOLO11 Instance Segmentation Browser onnxruntime-web
This is a YOLO instance segmentation web app, powered by onnxruntime-web.
It supports WebGPU and WASM (CPU) backends.
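The WebGPU / WASM choice maps to onnxruntime-web's `executionProviders` session option. As a minimal sketch (the helper name and fallback order are assumptions, not code from this repo), a WebGPU-capable browser can prefer `webgpu` and fall back to `wasm`:

```Javascript
// Hypothetical helper (not from this repo): choose onnxruntime-web
// execution providers, preferring WebGPU when the browser exposes it.
function pickExecutionProviders(hasWebGPU) {
  return hasWebGPU ? ["webgpu", "wasm"] : ["wasm"];
}

// In the browser, WebGPU support is detectable via "gpu" in navigator:
// const providers = pickExecutionProviders("gpu" in navigator);
// const session = await ort.InferenceSession.create(modelUrl, {
//   executionProviders: providers,
// });
```

The session-creation lines are commented out because they require the `ort` bundle and a model URL.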
## Models
### Available Yolo Models
| Model | Input Size | Param. |
| :----------------------------------------------------- | :--------: | :----: |
| [YOLO11-N](https://github.com/ultralytics/ultralytics) | 640 | 2.6M |
| [YOLO11-S](https://github.com/ultralytics/ultralytics) | 640 | 9.4M |

## Setup
```bash
git clone https://github.com/nomi30701/yolo-segmentation-onnxruntime-web.git
cd yolo-segmentation-onnxruntime-web
yarn install # install dependencies
```
## Scripts
```bash
yarn dev # start dev server
```

## Use another YOLO model
1. Convert your YOLO model to ONNX format. Read more at [Ultralytics](https://docs.ultralytics.com/).
```Python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n-seg.pt")

# Export the model
model.export(format="onnx", opset=12, dynamic=True)
```
2. Copy your YOLO model to the `./public/models` folder (or click the **`Add model`** button).
3. Add an `<option>` HTML element in `App.jsx`, with `value="YOUR_FILE_NAME"`:
```HTML
...
<option value="YOUR_FILE_NAME">CUSTOM-MODEL</option>
<option>yolo11n-2.6M</option>
<option>yolo11s-9.4M</option>
...
```
4. Select your model on the page.
5. DONE!👍
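A note on the `dynamic=True` export from step 1: the exported model then accepts variable input sizes, but YOLO downsamples by a stride of 32, so the pre-processed width and height are (by assumption, mirroring `preProcess_dynamic`'s `div_width`/`div_height`) rounded up to multiples of 32, with `xRatio`/`yRatio` mapping predictions back to source coordinates. A plain-JavaScript sketch of that bookkeeping:

```Javascript
// Hypothetical sketch (names are illustrative, not the repo's code):
// round source dimensions up to the model stride and compute the
// ratios that map model-space coordinates back to the source image.
function dynamicShape(srcWidth, srcHeight, stride = 32) {
  const divWidth = Math.ceil(srcWidth / stride) * stride;
  const divHeight = Math.ceil(srcHeight / stride) * stride;
  return {
    divWidth,
    divHeight,
    xRatio: srcWidth / divWidth,
    yRatio: srcHeight / divHeight,
  };
}

// Example: a 1280x720 frame pads to 1280x736.
const s = dynamicShape(1280, 720);
console.log(s.divWidth, s.divHeight); // 1280 736
console.log(s.xRatio.toFixed(3), s.yRatio.toFixed(3)); // 1.000 0.978
```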
> ✨ WebGPU support
>
> For ONNX models to support WebGPU, export the model with **`opset=12`**.

> ✨ Dynamic input size
>
> If you don't need dynamic input size, uncomment this call in **`/utils/inference_pipeline.js`**:
> ```Javascript
> const [src_mat_preProcessed, xRatio, yRatio] = await preProcess(
> src_mat,
> sessionsConfig.input_shape[2],
> sessionsConfig.input_shape[3]
> );
> ```
>
> And delete
> ```Javascript
> const [src_mat_preProcessed, div_width, div_height] = preProcess_dynamic(src_mat);
> const xRatio = src_mat.cols / div_width;
> const yRatio = src_mat.rows / div_height;
> ```
> Then change the tensor shape to the fixed input size:
> ```Javascript
> const input_tensor = new ort.Tensor("float32", src_mat_preProcessed.data32F, [
> 1,
> 3,
> sessionsConfig.input_shape[3],
> sessionsConfig.input_shape[2],
> ]);
> ```