Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/vc-bonn/charonload
Develop C++/CUDA extensions with PyTorch like Python scripts
- Host: GitHub
- URL: https://github.com/vc-bonn/charonload
- Owner: vc-bonn
- License: mit
- Created: 2024-01-29T15:33:26.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-11-15T19:41:16.000Z (3 months ago)
- Last Synced: 2024-12-16T03:50:57.585Z (about 2 months ago)
- Topics: cmake, cpp, cuda, jit, python, pytorch, torch
- Language: Python
- Homepage: https://vc-bonn.github.io/charonload/
- Size: 506 KB
- Stars: 8
- Watchers: 2
- Forks: 0
- Open Issues: 1
- Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
# CharonLoad
CharonLoad is a bridge between Python code and rapidly developed custom C++/CUDA extensions to make writing **high-performance research code** with *PyTorch* easy:
- 🔥 PyTorch C++ API detection and linking
- 🔨 Automatic just-in-time (JIT) compilation of the C++/CUDA part
- 📦 Cached incremental builds and automatic clean builds
- 🔗 Full power of CMake for handling C++ dependencies
- ⌨️ Python stub file generation for syntax highlighting and auto-completion in VS Code
- 🐛 Interactive mixed Python/C++ debugging support in VS Code via the *Python C++ Debugger* extension

CharonLoad reduces the burden of starting to write and experiment with custom GPU kernels in PyTorch by getting complex boilerplate code and common pitfalls out of your way. Developing C++/CUDA code with CharonLoad feels similar to writing Python scripts and lets you follow the same familiar workflow.
## Installation
CharonLoad requires **Python >=3.9** and can be installed from PyPI:
```sh
pip install charonload
```

## Quick Start
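Before installing, the interpreter can be checked against the stated requirement. This is a minimal sketch, not part of CharonLoad itself; the helper name `supports_charonload` is hypothetical and simply mirrors the **Python >=3.9** requirement above:

```python
import sys


def supports_charonload(version_info=sys.version_info):
    """Return True if the interpreter meets CharonLoad's Python >= 3.9 requirement."""
    return tuple(version_info[:2]) >= (3, 9)


if not supports_charonload():
    # Fail early with a clear message instead of a later import error.
    raise RuntimeError("CharonLoad requires Python >= 3.9")
```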
CharonLoad only requires minimal changes to existing projects. In particular, a small configuration of the C++/CUDA project is added on the Python side, while the CMake and C++ parts use a few predefined functions:
- `/main.py`
```python
import pathlib

import charonload

VSCODE_STUBS_DIRECTORY = pathlib.Path(__file__).parent / "typings"

charonload.module_config["my_cpp_cuda_ext"] = charonload.Config(
project_directory=pathlib.Path(__file__).parent / "",
build_directory="custom/build/directory", # optional
stubs_directory=VSCODE_STUBS_DIRECTORY, # optional
)

import other_module
```

- `/other_module.py`
```python
import my_cpp_cuda_ext  # JIT compiles and loads the extension

tensor_from_ext = my_cpp_cuda_ext.generate_tensor()
```

- `//CMakeLists.txt`
```cmake
find_package(charonload)

if(charonload_FOUND)
    charonload_add_torch_library(${TORCH_EXTENSION_NAME} MODULE)

    target_sources(${TORCH_EXTENSION_NAME} PRIVATE src/.cpp)
# Further properties, e.g. link against other 3rd-party libraries, etc.
# ...
endif()
```

- ``//src/.cpp``
```cpp
#include <torch/extension.h>

torch::Tensor generate_tensor();  // Implemented elsewhere in the extension sources

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
{
m.def("generate_tensor", &generate_tensor, "Optional Python docstring");
}
```

## Contributing
If you would like to contribute to CharonLoad, you can find more information in the [Contributing](https://vc-bonn.github.io/charonload/src/contributing.html) guide.
## License
MIT
## Contact
Patrick Stotko - [email protected]