Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/maximedebarbat/dolphin
Dolphin is a python toolkit meant to speed up inference of TensorRT by providing CUDA-Accelerated processing.
- Host: GitHub
- URL: https://github.com/maximedebarbat/dolphin
- Owner: MaximeDebarbat
- License: apache-2.0
- Created: 2023-01-28T09:55:49.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-07-20T13:28:41.000Z (over 1 year ago)
- Last Synced: 2024-09-15T16:12:38.525Z (5 months ago)
- Topics: cuda, python, tensorrt-inference
- Language: Python
- Homepage: https://dolphin-python.readthedocs.io/
- Size: 588 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
# Dolphin
![Banner](misc/banner.png)
General python package for CUDA accelerated deep learning inference.
- **Documentation** : [ReadTheDoc](https://dolphin-python.readthedocs.io/en/latest/index.html)
- **Source code** : [https://github.com/MaximeDebarbat/Dolphin](https://github.com/MaximeDebarbat/Dolphin)
- **Bug reports** : [https://github.com/MaximeDebarbat/Dolphin/issues](https://github.com/MaximeDebarbat/Dolphin/issues)
## Getting Started
Dolphin provides:
- A set of common image processing functions
- A TensorRT wrapper for easy inference
- Speeds up the inference with CUDA and TensorRT
- An easy to use API with Numpy
- A fast N-Dimensional array

## Testing
To test the package, you will need the `pytest` library, which you can run from the root of the project:
```
pytest
```

## Install
```
pip install dolphin-python
```

## Build
Dolphin can be installed from PyPI (coming soon) or built with Docker, which is the recommended way to use it:
```
docker build -f Dockerfile \
--rm \
-t dolphin:latest \
.
```

## Docker run
Ensure that you have the `nvidia-docker` package installed, then run the following command:
```
docker run \
-it \
--rm \
--gpus all \
-v "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )":"/app" \
dolphin:latest \
bash
```

Please note that Dolphin might not work without the `--gpus all` flag or `--runtime nvidia`.
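As a quick sanity check before (or inside) the container, you can probe whether the CUDA shared libraries are visible from Python. This is a minimal sketch using only the standard library; the library names checked here are common ones and may differ on your platform:

```python
from ctypes.util import find_library

# Probe for the CUDA runtime and driver libraries.
# find_library returns a library name/path if found, or None otherwise.
for name in ("cudart", "cuda", "nvcuda"):
    location = find_library(name)
    print(f"lib{name}: {location if location else 'not found'}")
```

If all three report "not found" inside the container, the `--gpus all` flag (or `--runtime nvidia`) is likely missing.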
## Acknowledgements
This project would not have been possible without [PyCuda](https://github.com/inducer/pycuda):
> Andreas Klöckner, Nicolas Pinto, Yunsup Lee, Bryan Catanzaro, Paul Ivanov, Ahmed Fasih. PyCUDA and PyOpenCL: A scripting-based approach to GPU run-time code generation. Parallel Computing, Volume 38, Issue 3, March 2012, Pages 157-174.
## TODOs
- [ ] Improve the `Engine` class to support *int8*
- [ ] Use Cython to speed up the code
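A common first step before reaching for Cython is to measure the hot paths with `timeit`. The sketch below uses only the standard library; `hot_path` is a hypothetical stand-in for a Dolphin processing loop, not an actual function from the package:

```python
import timeit

def hot_path(n: int) -> int:
    # Hypothetical stand-in for a tight per-element processing loop,
    # the kind of code Cython typically speeds up the most.
    total = 0
    for i in range(n):
        total += i * i
    return total

# Time 10 runs of the candidate hot path.
elapsed = timeit.timeit(lambda: hot_path(100_000), number=10)
print(f"10 runs: {elapsed:.4f}s")
```

Profiling like this helps confirm which loops are worth porting before committing to a Cython build step.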