# NVIDIA GPU ML library test

Simple tests for JAX, PyTorch, and TensorFlow to test if the installed NVIDIA drivers are being properly picked up.

[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/matthewfeickert/nvidia-gpu-ml-library-test/main.svg)](https://results.pre-commit.ci/latest/github/matthewfeickert/nvidia-gpu-ml-library-test/main)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

## Requirements

These instructions assume working on [Ubuntu 20.04 LTS](https://releases.ubuntu.com/20.04/).

- Computer with an NVIDIA GPU installed.
- Linux operating system (assumed to be an Ubuntu LTS) with root access.
- Python 3.6+ installed (recommended through [pyenv](https://github.com/pyenv/pyenv) for easy configuration).
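The Python requirement can be checked from the interpreter itself. A minimal sketch (the `(3, 6)` floor comes from the requirement above; the function name is illustrative):

```python
import sys


def meets_python_requirement(minimum=(3, 6)):
    """Return True if the running interpreter satisfies the minimum version."""
    return sys.version_info[:2] >= minimum


print(sys.version.split()[0], meets_python_requirement())
```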
**Example:**

This setup has been tested on the following systems:

* Dell XPS 15 9510 laptop
   - OS: Ubuntu 22.04
   - CPU: 11th Gen Intel Core i9-11900H @ 16x 4.8GHz
   - GPU: NVIDIA GeForce RTX 3050 Ti Laptop GPU
   - NVIDIA Driver: 535
   - Python: 3.10.6 built from source
* Custom built desktop
   - OS: Ubuntu 20.04
   - CPU: AMD Ryzen 9 3900X 12-Core @ 24x 3.906GHz
   - GPU: GeForce RTX 2080 Ti
   - NVIDIA Driver: 455
   - Python: 3.8.6 built from source

## Setup

### Installing Python Libraries

Create a Python virtual environment and install the base libraries from the relevant `requirements.txt` files.

Examples:

* To install the relevant JAX libraries for use with NVIDIA GPUs

```console
python -m pip install -r requirements-jax.txt
```

* To install the relevant JAX libraries for use with Apple silicon GPUs

```console
python -m pip install -r requirements-jax-metal.txt
```

### Installing NVIDIA Drivers and CUDA Libraries

#### Ubuntu NVIDIA Drivers

##### Ubuntu's Software & Updates Utility

The easiest way to determine the correct NVIDIA driver for your system is to have it determined automatically through Ubuntu's [Software & Updates utility, selecting the Drivers tab](https://wiki.ubuntu.com/SoftwareAndUpdatesSettings).

> The "Drivers" tab should begin with a listbox containing a progress bar and the text "Searching for available drivers…" until the search is complete.
> Once the search is complete, the listbox should list each device for which proprietary drivers could be installed.
> Each item in the list should have an indicator light: green if a driver tested with that Ubuntu release is being used, yellow if any other driver is being used, or red if no driver is being used.

Select the recommended NVIDIA driver from the list (proprietary, tested) and then select "Apply Changes" to install the driver.
After the driver has finished installing, restart the computer to verify the driver has been installed successfully.
If you run

```console
nvidia-smi
```

from the command line, the displayed driver version should match the one you installed.

**N.B.:** To check all the GPUs that are currently visible to NVIDIA you can use

```console
nvidia-smi --list-gpus
```

See the output of `nvidia-smi --help` for more details.

**Example:**

```console
$ nvidia-smi --list-gpus
GPU 0: NVIDIA GeForce RTX 3050 Ti Laptop GPU (UUID: GPU-9b3a1382-1fb8-43c7-67b1-c28af22b6767)
```

##### Command Line

Alternatively, if you are running headless or over a remote connection, you can determine and install the correct driver from the command line.
From the command line run

```console
ubuntu-drivers devices
```

to get a list of all devices on the machine that need drivers and the recommended drivers for them.

**Example:**

```console
$ ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:v000010DEd000025A0sv00001028sd00000A61bc03sc02i00
vendor   : NVIDIA Corporation
model    : GA107M [GeForce RTX 3050 Ti Mobile]
driver   : nvidia-driver-535-server-open - distro non-free
driver   : nvidia-driver-525-open - distro non-free
driver   : nvidia-driver-535-server - distro non-free
driver   : nvidia-driver-535 - distro non-free recommended
driver   : nvidia-driver-470-server - distro non-free
driver   : nvidia-driver-535-open - distro non-free
driver   : nvidia-driver-470 - distro non-free
driver   : nvidia-driver-525-server - distro non-free
driver   : nvidia-driver-525 - distro non-free
driver   : xserver-xorg-video-nouveau - distro free builtin
```

You can now either install the supported driver you want directly through `apt`

**Example:**

```console
sudo apt-get install nvidia-driver-535
```

or you can let `ubuntu-drivers` install the recommended driver for you automatically

```console
sudo ubuntu-drivers autoinstall
```
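When scripting a headless setup, the `recommended` line in that output can also be picked out programmatically. A small sketch, assuming the `driver : name - ... recommended` layout shown above (the function name is illustrative):

```python
def recommended_driver(ubuntu_drivers_output):
    """Return the package name marked 'recommended' by `ubuntu-drivers devices`."""
    for line in ubuntu_drivers_output.splitlines():
        if line.startswith("driver") and line.rstrip().endswith("recommended"):
            # e.g. "driver   : nvidia-driver-535 - distro non-free recommended"
            return line.split(":", 1)[1].split()[0]
    return None


example = """\
driver   : nvidia-driver-525-open - distro non-free
driver   : nvidia-driver-535 - distro non-free recommended
driver   : xserver-xorg-video-nouveau - distro free builtin
"""
print(recommended_driver(example))  # nvidia-driver-535
```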
#### NVIDIA CUDA Toolkit

After installing the NVIDIA driver, the [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) also needs to be installed.
This needs to be done every time you update the NVIDIA driver.
This can be done manually by following the instructions on the NVIDIA website, but it can also be done automatically by `apt` installing the [Ubuntu package `nvidia-cuda-toolkit`](https://packages.ubuntu.com/search?keywords=nvidia-cuda-toolkit).

```console
sudo apt-get update -y
sudo apt-get install -y nvidia-cuda-toolkit
```

**Example:**

```console
$ apt show nvidia-cuda-toolkit | head -n 5
Package: nvidia-cuda-toolkit
Version: 11.5.1-1ubuntu1
Priority: extra
Section: multiverse/devel
Origin: Ubuntu
```

After the NVIDIA CUDA Toolkit is installed, restart the computer.

**N.B.:** If the NVIDIA drivers are ever changed, the NVIDIA CUDA Toolkit will need to be reinstalled.

Now that the system NVIDIA drivers are installed, the necessary requirements can be stepped through for the different machine learning backends in order (from easiest to hardest).

#### PyTorch

PyTorch makes things very easy by [packaging all of the necessary CUDA libraries with its binary distributions](https://discuss.pytorch.org/t/newbie-question-what-are-the-prerequisites-for-running-pytorch-with-gpu/698/3) (which is why they are so huge).
So by `pip` installing the `torch` wheel all necessary libraries are installed.
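A quick way to confirm that the bundled CUDA libraries work is `torch.cuda.is_available()`, which is PyTorch's standard GPU check. A sketch that degrades gracefully when `torch` is not installed (the guard via `importlib.util.find_spec` is an assumption about how you want to handle missing installs, not part of PyTorch's API):

```python
import importlib.util
from typing import Optional


def torch_sees_gpu() -> Optional[bool]:
    """True/False if PyTorch is installed; None if it is not installed at all."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch  # safe: the module spec was found above

    return torch.cuda.is_available()


print(torch_sees_gpu())
```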
#### JAX

The [CUDA and cuDNN release wheels can be installed from PyPI and Google with `pip`](https://github.com/google/jax/blob/6eb3096461abdbf622df5ebeee57ee40bdfb66b0/README.md#pip-installation-gpu-cuda-installed-via-pip-easier)

```console
python -m pip install --upgrade "jax[cuda12_pip]" --find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```

##### With local CUDA installations

To instead install `jax` and `jaxlib` but use locally installed CUDA and cuDNN versions, follow the instructions in the [JAX README](https://github.com/google/jax/blob/main/README.md).
In these circumstances, to point JAX at the location of the installed CUDA release, you can set the following environment variable before importing JAX

```console
XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda
```

**Example:**

```console
XLA_FLAGS=--xla_gpu_cuda_data_dir=/usr/lib/cuda/ python jax_MNIST.py
```

#### TensorFlow

**WARNING:** This section will go out of date fast, so you'll have to adapt it to your particular circumstances.

TensorFlow requires the [NVIDIA cuDNN](https://developer.nvidia.com/CUDNN) closed source libraries, which are a pain to get and have quite bad documentation.
To download the libraries you will need to make an account with NVIDIA and register as a developer, which is also a bad experience.
Once you've done that, go to the [cuDNN download page](https://developer.nvidia.com/rdp/cudnn-download), agree to the Software License Agreement, and then select the version of cuDNN that matches the version of CUDA your **operating system** has (**the version from `nvidia-smi`**, which is not necessarily the same as the version from `nvcc --version`).

**Example:**

For the choices of

- cuDNN v8.2.2 for CUDA 11.4
- cuDNN v8.2.2 for CUDA 10.2

```console
$ nvidia-smi | grep "CUDA Version"
| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |
```

would indicate that cuDNN v8.2.2 for CUDA 11.4 is the recommended version.
(This is verified by noting that, when clicked on, the entry for cuDNN v8.2.2 for CUDA 11.4 lists support for Ubuntu 20.04, but the entry for cuDNN v8.2.2 for CUDA 10.2 lists support only for Ubuntu 18.04.)
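Rather than eyeballing the banner, the CUDA version can be pulled out of the `nvidia-smi` header line with a regular expression. A sketch using the sample line above (the function name is illustrative; the banner layout is as produced by recent `nvidia-smi` releases):

```python
import re


def cuda_version_from_smi(header_line):
    """Extract the 'CUDA Version' field from the nvidia-smi banner line."""
    match = re.search(r"CUDA Version:\s*([\d.]+)", header_line)
    return match.group(1) if match else None


line = "| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |"
print(cuda_version_from_smi(line))  # 11.4
```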
Click on the cuDNN release you want to download to see the available libraries for supported system architectures.
As these instructions are using Ubuntu, download the tarball for the cuDNN library along with the Debian binaries for the cuDNN runtime library, developer library, and code samples.

**Example:**

- cuDNN Library for Linux (x86_64)
- cuDNN Runtime Library for Ubuntu20.04 x86_64 (Deb)
- cuDNN Developer Library for Ubuntu20.04 x86_64 (Deb)
- cuDNN Code Samples and User Guide for Ubuntu20.04 x86_64 (Deb)

Once all the libraries are downloaded locally, refer to the [directions for installing on Linux](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#installlinux) in the [cuDNN installation guide](http://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html).
The documentation refers to a CUDA directory path (which they generically call `/usr/local/cuda`) and a download path for all of the cuDNN libraries (referred to as `<cudnnpath>`).
For the CUDA directory path we _could_ use our existing symlink of `/usr/local/cuda-10.1`, but the cuDNN examples all assume the path is `/usr/local/cuda`, so it is easier to make a new symlink of `/usr/local/cuda` pointing to `/usr/lib/cuda`.

```console
sudo ln -s /usr/lib/cuda /usr/local/cuda
```

The examples are also going to assume that `nvcc` is at `/usr/local/cuda/bin/nvcc` and `cuda.h` is at `/usr/local/cuda/include/cuda.h`, so create additional symlinks at those locations pointing to `/usr/bin/nvcc` and `/usr/include/cuda.h`

```console
sudo ln -s /usr/bin/nvcc /usr/local/cuda/bin/nvcc
sudo ln -s /usr/include/cuda.h /usr/local/cuda/include/cuda.h
```

#### Install cuDNN Library

1. Navigate to your `<cudnnpath>` directory containing the cuDNN tar file (example: `cudnn-11.4-linux-x64-v8.2.2.26.tgz`)
2. Untar the cuDNN library tarball (the untarred directory name is `cuda`)

```console
tar -xzvf cudnn-*-linux-x64-v*.tgz
```

3. Copy the library files into the CUDA Toolkit directory

```console
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
```
4. Set the permissions for the files to be universally readable

```console
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
```

#### Install cuDNN Runtime and Developer Libraries

To use cuDNN in your applications, the runtime library, developer library, and code samples should be installed too.
This can be done with `apt install` from your `<cudnnpath>`.

**Example:**

```console
sudo apt install ./libcudnn8_8.2.2.26-1+cuda11.4_amd64.deb
sudo apt install ./libcudnn8-dev_8.2.2.26-1+cuda11.4_amd64.deb
sudo apt install ./libcudnn8-samples_8.2.2.26-1+cuda11.4_amd64.deb
```

#### Test cuDNN Installation

Copy the cuDNN samples to a writable path

```console
cp -r /usr/src/cudnn_samples_v8/ $PWD
```

then navigate to the `mnistCUDNN` sample directory and compile and run the sample

```console
cd cudnn_samples_v8/mnistCUDNN
make clean && make
./mnistCUDNN
```

If everything is set up correctly, the resulting output should conclude with

```
Test passed!
```

#### Adding CUDA and cuDNN to PATHs

The installed libraries should also be added to `PATH` and `LD_LIBRARY_PATH`, so add the following to your `~/.profile` to be loaded at system login

```bash
# Add CUDA Toolkit 10.1 to PATH
# /usr/local/cuda-10.1 should be a symlink to /usr/lib/cuda
if [ -d "/usr/local/cuda-10.1/bin" ]; then
    PATH="/usr/local/cuda-10.1/bin:${PATH}"; export PATH;
elif [ -d "/usr/lib/cuda/bin" ]; then
    PATH="/usr/lib/cuda/bin:${PATH}"; export PATH;
fi
# Add cuDNN to LD_LIBRARY_PATH
# /usr/local/cuda should be a symlink to /usr/lib/cuda
if [ -d "/usr/local/cuda/lib64" ]; then
    LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"; export LD_LIBRARY_PATH;
elif [ -d "/usr/lib/cuda/lib64" ]; then
    LD_LIBRARY_PATH="/usr/lib/cuda/lib64:${LD_LIBRARY_PATH}"; export LD_LIBRARY_PATH;
fi
```
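The shell snippet above implements a first-existing-directory fallback; the same logic, sketched in Python (the paths are the ones from the snippet and will differ on other systems):

```python
import os


def first_existing_dir(candidates):
    """Return the first path in `candidates` that exists as a directory, else None."""
    for path in candidates:
        if os.path.isdir(path):
            return path
    return None


cuda_bin = first_existing_dir(["/usr/local/cuda-10.1/bin", "/usr/lib/cuda/bin"])
if cuda_bin is not None:
    # Prepend, mirroring PATH="...:${PATH}" in the shell version.
    os.environ["PATH"] = cuda_bin + os.pathsep + os.environ.get("PATH", "")
```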
#### Check TensorFlow Version Restrictions

TensorFlow does not really respect semver, as minor releases act essentially as major releases with breaking changes.
This comes into play when considering the [tested build configurations for CUDA and cuDNN versions](https://www.tensorflow.org/install/source#linux).
For example, looking at supported ranges for TensorFlow `v2.3.0` through `v2.5.0`

| Version          | Python version | Compiler  | Build tools | cuDNN | CUDA |
|------------------|----------------|-----------|-------------|-------|------|
| tensorflow-2.5.0 | 3.6-3.9        | GCC 7.3.1 | Bazel 3.7.2 | 8.1   | 11.2 |
| tensorflow-2.4.0 | 3.6-3.8        | GCC 7.3.1 | Bazel 3.1.0 | 8.0   | 11.0 |
| tensorflow-2.3.0 | 3.5-3.8        | GCC 7.3.1 | Bazel 3.1.0 | 7.6   | 10.1 |

it is seen that for our example of

```console
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
```

only TensorFlow `v2.3.0` will be compatible with our installation.
However, TensorFlow `v2.3.0` requires cuDNN `v7.x` (`libcudnn.so.7`) and we have cuDNN `v8.x` (`libcudnn.so.8`).
The NVIDIA [cuDNN installation documentation notes](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#upgrade) that

> Since version 8 can coexist with previous versions of cuDNN, if the user has an older version of cuDNN such as v6 or v7, installing version 8 will not automatically delete an older revision.

While we could go and try to install cuDNN `v7.6` from the [cuDNN archives](https://developer.nvidia.com/rdp/cudnn-archive), it turns out that [TensorFlow is okay with](https://github.com/tensorflow/tensorflow/issues/20271#issuecomment-643296453) symlinking `libcudnn.so.8` to a target of `libcudnn.so.7`, so until this causes problems, move forward with this approach

```console
sudo ln -s /usr/lib/cuda/lib64/libcudnn.so.8 /usr/local/cuda/lib64/libcudnn.so.7
```
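To avoid re-reading the tested-build table by hand, its rows can be encoded as data and queried. A sketch limited to the three rows shown above (the real table on the TensorFlow site has many more; the names `TESTED_BUILDS` and `tensorflow_versions_for_cuda` are illustrative):

```python
# Rows copied from the tested-build-configurations table above.
TESTED_BUILDS = {
    "2.5.0": {"python": "3.6-3.9", "cudnn": "8.1", "cuda": "11.2"},
    "2.4.0": {"python": "3.6-3.8", "cudnn": "8.0", "cuda": "11.0"},
    "2.3.0": {"python": "3.5-3.8", "cudnn": "7.6", "cuda": "10.1"},
}


def tensorflow_versions_for_cuda(cuda_version):
    """TensorFlow releases whose tested configuration matches this CUDA release."""
    return sorted(tf for tf, req in TESTED_BUILDS.items() if req["cuda"] == cuda_version)


print(tensorflow_versions_for_cuda("10.1"))  # ['2.3.0']
```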
You should now have a directory structure for `/usr/local/cuda` that looks something like the following

```console
$ tree /usr/local/cuda
/usr/local/cuda
├── bin
│   └── nvcc -> /usr/bin/nvcc
├── include
│   ├── cuda.h -> /usr/include/cuda.h
│   ├── cudnn_adv_infer.h
│   ├── cudnn_adv_infer_v8.h
│   ├── cudnn_adv_train.h
│   ├── cudnn_adv_train_v8.h
│   ├── cudnn_backend.h
│   ├── cudnn_backend_v8.h
│   ├── cudnn_cnn_infer.h
│   ├── cudnn_cnn_infer_v8.h
│   ├── cudnn_cnn_train.h
│   ├── cudnn_cnn_train_v8.h
│   ├── cudnn.h
│   ├── cudnn_ops_infer.h
│   ├── cudnn_ops_infer_v8.h
│   ├── cudnn_ops_train.h
│   ├── cudnn_ops_train_v8.h
│   ├── cudnn_v8.h
│   ├── cudnn_version.h
│   └── cudnn_version_v8.h
├── lib64
│   ├── libcudnn_adv_infer.so
│   ├── libcudnn_adv_infer.so.8
│   ├── libcudnn_adv_infer.so.8.2.2
│   ├── libcudnn_adv_train.so
│   ├── libcudnn_adv_train.so.8
│   ├── libcudnn_adv_train.so.8.2.2
│   ├── libcudnn_cnn_infer.so
│   ├── libcudnn_cnn_infer.so.8
│   ├── libcudnn_cnn_infer.so.8.2.2
│   ├── libcudnn_cnn_infer_static.a
│   ├── libcudnn_cnn_infer_static_v8.a
│   ├── libcudnn_cnn_train.so
│   ├── libcudnn_cnn_train.so.8
│   ├── libcudnn_cnn_train.so.8.2.2
│   ├── libcudnn_cnn_train_static.a
│   ├── libcudnn_cnn_train_static_v8.a
│   ├── libcudnn_ops_infer.so
│   ├── libcudnn_ops_infer.so.8
│   ├── libcudnn_ops_infer.so.8.2.2
│   ├── libcudnn_ops_train.so
│   ├── libcudnn_ops_train.so.8
│   ├── libcudnn_ops_train.so.8.2.2
│   ├── libcudnn.so
│   ├── libcudnn.so.7 -> /usr/lib/cuda/lib64/libcudnn.so.8
│   ├── libcudnn.so.8
│   ├── libcudnn.so.8.2.2
│   ├── libcudnn_static.a
│   └── libcudnn_static_v8.a
├── nvvm
│   └── libdevice -> ../../nvidia-cuda-toolkit/libdevice
└── version.txt

5 directories, 49 files
```

With this final set of libraries installed, restart your computer.

## Testing

### Detect GPU

For all of the ML libraries you can now run the `x_detect_GPU.py` tests, which test that the library can properly access the GPU and CUDA, where `x` is the library name/nickname.
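The `x_detect_GPU.py` scripts in this repository are the real tests. As a rough, stdlib-only sketch of a first step they might share, one can check which backends are even importable before asking each about the GPU (the candidate names are assumptions about what is installed):

```python
import importlib.util


def importable_backends(candidates=("jax", "torch", "tensorflow")):
    """Return the subset of `candidates` importable in the current environment."""
    return [name for name in candidates if importlib.util.find_spec(name) is not None]


print(importable_backends())
```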
### MNIST

For all of the ML libraries you can run a simple MNIST test by running `x_MNIST.py`, where `x` is the library name/nickname.

### Monitoring

While running tests, it is worthwhile to watch the GPU performance with `nvidia-smi` in another terminal

```console
watch --interval 0.5 nvidia-smi
```

## Notes

### Useful Sites

- The [JAX README](https://github.com/google/jax)
- The [TensorFlow GPU support page](https://www.tensorflow.org/install/gpu), which leads to the **actually useful** listing of [tested build configurations for CUDA and cuDNN versions](https://www.tensorflow.org/install/source#linux)

### Useful GitHub Issues

- [JAX Issue 3984](https://github.com/google/jax/issues/3984): automatic detection for GPU pip install doesn't quite work on Ubuntu 20.04
- [TensorFlow Issue 20271](https://github.com/tensorflow/tensorflow/issues/20271#issuecomment-643296453): ImportError: libcudnn.so.7: cannot open shared object file: No such file or directory

## Acknowledgements

Thanks to [Giordon Stark](https://github.com/kratsg/), who greatly helped me scaffold the right approach to this setup, as well as for his help doing system setup comparisons.