https://github.com/guyi2000/preallocate-cuda-memory
A small tool that preallocates CUDA memory for PyTorch, useful when competing with other users for computational resources on a shared machine.
- Host: GitHub
- URL: https://github.com/guyi2000/preallocate-cuda-memory
- Owner: guyi2000
- License: mit
- Created: 2024-05-29T08:36:31.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-06-02T11:11:53.000Z (about 1 year ago)
- Last Synced: 2025-03-03T00:21:22.232Z (4 months ago)
- Topics: ai, cuda-memory, cuda-memory-allocation, pytorch
- Language: Python
- Homepage:
- Size: 4.88 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Preallocate CUDA memory for PyTorch
This tool preallocates CUDA memory for PyTorch, which is useful when competing with others for computational resources.
You can use the following command directly on the command line:
```bash
python -m preallocate_cuda_memory
```
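The module invocation above does not show how to pick a GPU from the shell. The package's own command-line options (if any) are not documented here, but one standard, tool-agnostic way is to limit which devices the process can see via `CUDA_VISIBLE_DEVICES`, for example:
```bash
# Assumption: target GPU 1 by restricting device visibility with the standard
# CUDA environment variable; the module itself may also expose its own flags.
CUDA_VISIBLE_DEVICES=1 python -m preallocate_cuda_memory
```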
Or you can use it in a Python file:
```python
import preallocate_cuda_memory as pc

mc = pc.MemoryController(0)  # 0 is the GPU index
mc.occupy_all_available_memory()
mc.free_memory()
```
If you find any issues, please feel free to contact the author by raising an issue on GitHub.
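For reference, the core idea behind such a tool can be sketched with plain PyTorch. The snippet below is only a minimal illustration, not this package's implementation; the function name `occupy_gpu` and the `headroom_bytes` parameter are made up for the example. It queries the free memory on one device with `torch.cuda.mem_get_info` and holds a large tensor to keep that memory reserved:
```python
import torch

def occupy_gpu(device_index: int = 0, headroom_bytes: int = 512 * 1024 * 1024) -> torch.Tensor:
    """Reserve (almost) all currently free memory on one GPU by holding a large tensor."""
    device = torch.device(f"cuda:{device_index}")
    free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
    # Keep some headroom so the single large allocation does not fail outright.
    n_bytes = max(free_bytes - headroom_bytes, 0)
    # uint8 elements are one byte each, so this tensor occupies roughly n_bytes of GPU memory.
    return torch.empty(n_bytes, dtype=torch.uint8, device=device)

block = occupy_gpu(0)     # the memory stays reserved while this reference is alive
del block                 # dropping the reference returns it to PyTorch's caching allocator...
torch.cuda.empty_cache()  # ...and this hands the cached memory back to the driver
```
Holding the tensor (rather than allocating and immediately freeing it) is what keeps the memory unavailable to other processes; releasing it is simply a matter of dropping the reference and emptying PyTorch's cache.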