How to determine the highest pytorch-cuda version that my VM supports

I am using Miniconda on a virtual machine. I installed PyTorch with cuda=11.8, and it works fine. Recently, I found out that the CUDA version on my VM is only 11.6.

Running nvidia-smi in the terminal returns a table containing NVIDIA-SMI 510.73.08, Driver Version: 510.73.08, CUDA Version: 11.6.

Since the PyTorch I installed works fine, I guess the CUDA version PyTorch ships with is independent of the CUDA version on my VM reported by nvidia-smi. Please correct me if this is wrong and I should lower my PyTorch CUDA version.
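
As a quick check, the CUDA version the binary was built against can be read from Python; it is independent of the CUDA version that nvidia-smi reports:

```python
import torch

# CUDA version the installed PyTorch binary ships with (here: 11.8),
# not the 11.6 that nvidia-smi reports for the driver.
print(torch.version.cuda)

# True if the installed driver can actually run this binary on the GPU.
print(torch.cuda.is_available())
```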

In general, how do I determine the highest pytorch-cuda version that my VM supports? Is it determined by the driver version in the table returned by nvidia-smi?

From the CUDA compatibility table, CUDA 12.x needs driver version >= 525.60.13. So in my case, I cannot install the preview version of PyTorch, pytorch-cuda=12.1?
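
As a sketch, the installed driver version can be queried and compared against that threshold; the nvidia-smi query flags below are standard, and the 525.60.13 minimum is the one quoted from the compatibility table:

```python
import subprocess

# Ask the driver for its version (the same number shown in the
# nvidia-smi table header).
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
driver = out.stdout.strip().splitlines()[0]  # e.g. '510.73.08'

# CUDA 12.x binaries need driver >= 525.60.13 per the compatibility table.
installed = tuple(int(part) for part in driver.split("."))
print("Can run CUDA 12.x builds:", installed >= (525, 60, 13))
```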

That’s correct, since PyTorch binaries ship with their own CUDA dependencies and only need a properly installed NVIDIA driver to run workloads on your GPU.

Yes, you would need to install the right driver, but also note that CUDA supports minor version compatibility, allowing you to stick to the same driver for a CUDA major release.
E.g., you could have used 470.xx for all PyTorch binaries shipping with any CUDA 11.x version without updating the driver, and you could use e.g. 525.xx for all PyTorch binaries shipping with any CUDA 12.x (or older) toolkit without updating the driver.
Note, however, that you must update the NVIDIA driver when jumping from CUDA 11 to CUDA 12, as this is a major version update.
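
To make that rule concrete, here is a minimal sketch of the mapping; the driver series numbers follow the examples above, and NVIDIA's compatibility table has the exact minimum versions:

```python
def max_cuda_major(driver_version):
    """Newest CUDA major release whose PyTorch binaries this driver can run,
    following minor version compatibility (series numbers per the examples
    above; see NVIDIA's table for exact minimums)."""
    major = int(driver_version.split(".")[0])
    if major >= 525:   # 525.xx covers all CUDA 12.x binaries (and older)
        return 12
    if major >= 470:   # 470.xx covers all CUDA 11.x binaries
        return 11
    return None        # older driver: consult the compatibility table

print(max_cuda_major("510.73.08"))  # -> 11, so pytorch-cuda=11.x is the ceiling
```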

Yes, this should be the case unless you are using datacenter GPUs and apply forward compatibility.