GpuUtils: A Simple Tool for GPU Analysis and Allocation
https://github.com/serengil/gpuutils
- Host: GitHub
- URL: https://github.com/serengil/gpuutils
- Owner: serengil
- License: mit
- Created: 2020-04-21T06:43:55.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-04-23T16:05:03.000Z (over 5 years ago)
- Last Synced: 2025-07-22T23:14:29.753Z (3 months ago)
- Topics: cuda, gpu, nvidia, nvidia-smi
- Language: Python
- Homepage: https://sefiks.com/
- Size: 150 KB
- Stars: 15
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# GpuUtils
Working in a shared, distributed environment with multiple GPUs can be problematic. Advanced frameworks apply a greedy approach and tend to allocate every GPU and all of its memory. GpuUtils helps you find the best GPU on your system to allocate. It also provides GPU-related information in a structured format.
## Installation
The easiest way to install GpuUtils is to install it via [PyPI](https://pypi.org/project/gpuutils).
```
pip install gpuutils
```

## Analyzing system
Running the **nvidia-smi** command in the command prompt lets users monitor GPU-related information such as memory and utilization. The system analysis function loads this information into a pandas data frame or a JSON array.
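For context, this is roughly the kind of nvidia-smi query such a tool can build on. A minimal sketch, assuming `nvidia-smi` is on the PATH; the `--query-gpu` and `--format` flags are standard nvidia-smi options, but the parsing helper and column names below are illustrative, not GpuUtils internals:

```python
import subprocess

# Fields queried from nvidia-smi; memory in MB, power in watts (nounits mode).
QUERY = "index,memory.total,memory.free,utilization.gpu,power.draw,power.limit"

def parse_smi_csv(text):
    """Parse nvidia-smi CSV output (noheader, nounits) into a list of dicts."""
    rows = []
    for line in text.strip().splitlines():
        idx, total, free, util, draw, limit = [v.strip() for v in line.split(",")]
        rows.append({
            "gpu_index": int(idx),
            "total_memories_in_mb": int(total),
            "available_memories_in_mb": int(free),
            "memory_usage_percentage": round(100 * (1 - int(free) / int(total)), 4),
            "utilizations": int(util),
            "power_usages_in_watts": float(draw),
            "power_capacities_in_watts": float(limit),
        })
    return rows

def analyze():
    """Run nvidia-smi and return one dict per GPU (requires an NVIDIA driver)."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_smi_csv(out)
```

Feeding such a list of dicts into `pandas.DataFrame` yields a table like the one below.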
```python
from gpuutils import GpuUtils
df = GpuUtils.analyzeSystem() #returns a pandas data frame
#gpus = GpuUtils.analyzeSystem(pandas_format = False) #returns a JSON array
```

The default configuration of the system analysis returns a pandas data frame.
| gpu_index | total_memories_in_mb | available_memories_in_mb | memory_usage_percentage | utilizations | power_usages_in_watts | power_capacities_in_watts |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 32480 | 32469 | 0.0339 | 0 | 43 | 300 |
| 2 | 32480 | 32469 | 0.0339 | 0 | 43 | 300 |
| 3 | 32480 | 32469 | 0.0339 | 0 | 44 | 300 |
| 4 | 32480 | 32469 | 0.0339 | 0 | 43 | 300 |
| 5 | 32480 | 32469 | 0.0339 | 0 | 43 | 300 |
| 6 | 32480 | 32469 | 0.0339 | 0 | 43 | 300 |
| 7 | 32480 | 32469 | 0.0339 | 0 | 43 | 300 |
| 0 | 32480 | 31031 | 4.4612 | 7 | 56 | 300 |

## Allocation
GpuUtils can allocate GPUs as well. Calling the allocation function finds the available GPUs and allocates them based on your demand.
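Under the hood, "allocating" a GPU to a process typically means exporting `CUDA_VISIBLE_DEVICES`, which is standard CUDA behavior; the selection logic below is a hypothetical sketch, not GpuUtils' actual implementation:

```python
import os

def allocate(gpus, required_memory_mb=1024, gpu_count=1):
    """Pick the freest GPUs that satisfy the memory requirement and expose
    only those to CUDA. The env var is standard CUDA; this selection policy
    (sort by free memory, take the top N) is illustrative."""
    eligible = [g for g in gpus if g["available_memories_in_mb"] >= required_memory_mb]
    eligible.sort(key=lambda g: g["available_memories_in_mb"], reverse=True)
    chosen = eligible[:gpu_count]
    if not chosen:
        raise RuntimeError("no GPU satisfies the memory requirement")
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(g["gpu_index"]) for g in chosen)
    return [g["gpu_index"] for g in chosen]
```

Because `CUDA_VISIBLE_DEVICES` must be set before the deep-learning framework initializes CUDA, this kind of call belongs at the top of a script.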
```python
from gpuutils import GpuUtils
GpuUtils.allocate() #tries to allocate a GPU with 1 GB of memory
#GpuUtils.allocate(required_memory = 10000)
#GpuUtils.allocate(required_memory = 10000, gpu_count=1)
```

## To avoid greedy approach
Advanced frameworks such as TensorFlow tend to allocate all GPU memory. You can avoid this by passing the framework argument to the allocate function; the framework will then use only as much GPU memory as it needs. Currently, the keras and tensorflow frameworks are supported in the allocate function.
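In TensorFlow, such a framework hint plausibly boils down to enabling memory growth on each visible GPU. A minimal sketch using TensorFlow's public `tf.config` API, guarded so it degrades gracefully when TensorFlow is absent:

```python
# Hedged sketch: enable memory growth so TensorFlow grabs GPU memory on
# demand instead of reserving it all at once. The tf.config calls are
# TensorFlow's public API; the import guard is only for the sketch.
try:
    import tensorflow as tf
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)
except ImportError:
    tf = None  # TensorFlow not installed; nothing to configure
```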
```python
GpuUtils.allocate(framework = 'keras')
```

## Support
There are many ways to support a project - starring⭐️ the GitHub repos is just one.
## Licence
GpuUtils is licensed under the MIT License - see [`LICENSE`](https://github.com/serengil/gpuutils/blob/master/LICENSE) for more details.