PyTorch version for CUDA 12.2

Please help!

I am using Colab, Python 3.10.12, with a GPU:
NVIDIA-SMI 535.104.05, Driver Version: 535.104.05, CUDA Version: 12.2
cuda_12.2.r12.2/compiler.33191640_0
I'm not able to find a PyTorch build for CUDA 12.2, so I used the installation command for CUDA 12.1:

pip3 install torch torchvision torchaudio
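After installing, you can confirm which CUDA runtime the wheel actually ships with from Python. This is a small sketch using standard torch attributes, guarded so it also runs in an environment where torch isn't installed:

```python
# Check which CUDA runtime the installed PyTorch wheel was built against.
# torch.version.cuda is a standard attribute, e.g. "12.1" for the cu121 wheels.
def torch_cuda_version():
    try:
        import torch
    except ImportError:
        return None  # torch not installed in this environment
    return torch.version.cuda  # None for CPU-only builds

print(torch_cuda_version())
```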

I cloned the repo "GitHub - davidnvq/grit: GRIT: Faster and Better Image-captioning Transformer (ECCV 2022)".

When I run the following
!python models/ops/setup.py build develop
I get this output:
running build
running build_py
running build_ext
/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py:502: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja… Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py:414: UserWarning: The detected CUDA version (12.2) has a minor version mismatch with the version that was used to compile PyTorch (12.1). Most likely this shouldn’t be a problem.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py:424: UserWarning: There are no x86_64-linux-gnu-g++ version bounds defined for CUDA version 12.2
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
running develop
/usr/local/lib/python3.10/dist-packages/setuptools/command/develop.py:40: EasyInstallDeprecationWarning: easy_install command is deprecated.
!!

    ********************************************************************************
    Please avoid running ``setup.py`` and ``easy_install``.
    Instead, use pypa/build, pypa/installer or other
    standards-based tools.

    See https://github.com/pypa/setuptools/issues/917 for details.
    ********************************************************************************

!!
easy_install.initialize_options(self)
/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!

    ********************************************************************************
    Please avoid running ``setup.py`` directly.
    Instead, use pypa/build, pypa/installer or other
    standards-based tools.

    See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
    ********************************************************************************

!!
self.initialize_options()
running egg_info
writing MultiScaleDeformableAttention.egg-info/PKG-INFO
writing dependency_links to MultiScaleDeformableAttention.egg-info/dependency_links.txt
writing top-level names to MultiScaleDeformableAttention.egg-info/top_level.txt
reading manifest file 'MultiScaleDeformableAttention.egg-info/SOURCES.txt'
writing manifest file 'MultiScaleDeformableAttention.egg-info/SOURCES.txt'
running build_ext
/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py:414: UserWarning: The detected CUDA version (12.2) has a minor version mismatch with the version that was used to compile PyTorch (12.1). Most likely this shouldn’t be a problem.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py:424: UserWarning: There are no x86_64-linux-gnu-g++ version bounds defined for CUDA version 12.2
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
Creating /usr/local/lib/python3.10/dist-packages/MultiScaleDeformableAttention.egg-link (link to .)
MultiScaleDeformableAttention 1.0 is already the active version in easy-install.pth

What kind of issue are you seeing besides the warning?
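For context, everything in that log is a warning, not an error. The CUDA minor-version mismatch in particular is benign: the check only cares that the major versions agree. A rough sketch of that logic (my paraphrase under that assumption, not torch's actual code):

```python
# Sketch (assumption): a CUDA version compatibility check in the spirit of
# the warning raised by torch.utils.cpp_extension.
def cuda_versions_compatible(detected: str, compiled: str) -> bool:
    d_major, d_minor = (int(x) for x in detected.split(".")[:2])
    c_major, c_minor = (int(x) for x in compiled.split(".")[:2])
    # Same major version: a minor mismatch (12.2 vs. 12.1) only warns.
    return d_major == c_major

print(cuda_versions_compatible("12.2", "12.1"))  # → True
```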

Thank you for your reply. Other than the warnings, do you mean it should be fine to use PyTorch with CUDA 12.2 on a Colab GPU? I feel lost, as I keep getting these errors:
2024-01-10 19:40:50.450395: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-01-10 19:40:50.450456: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-01-10 19:40:50.452587: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-01-10 19:40:52.010412: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

Yes, your locally installed CUDA 12.2 toolkit should not interfere with the PyTorch binaries shipping with CUDA 12.1U1. The other warnings you are seeing are raised by TensorFlow, and I'm not familiar with those.
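If you want to double-check on your side, something like this should confirm the cu121 wheel works under the 12.2 driver. It uses only standard torch attributes and is guarded so it also runs where torch (or a GPU) is absent:

```python
# Sanity check: does the installed PyTorch wheel see the GPU at all?
def gpu_status():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return f"torch {torch.__version__} (CUDA {torch.version.cuda}), no GPU visible"
    # A visible device means the cu121 wheel is working under this driver.
    return f"torch {torch.__version__} (CUDA {torch.version.cuda}) on {torch.cuda.get_device_name(0)}"

print(gpu_status())
```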