NVIDIA GeForce RTX 3050 Ti Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation

miniconda3/envs/smr_env1/lib/python3.7/site-packages/torch/cuda/__init__.py:106: UserWarning:
NVIDIA GeForce RTX 3050 Ti Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the NVIDIA GeForce RTX 3050 Ti Laptop GPU GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

All the other threads say to use CUDA 11, but I am already on CUDA version 11.4 and still face this error. Any solutions?

You most likely installed a PyTorch pip wheel or conda binary built with the CUDA 10.2 runtime.
Note that your locally installed CUDA toolkit is only used if you build PyTorch from source or a custom CUDA extension; the pip wheels and conda binaries ship with their own CUDA runtime.

I most likely ran `pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html` to install PyTorch and specify the CUDA version.

The output for the PyTorch CUDA version is:

>>> print(torch.version.cuda)
11.3

What would be a likely solution in this case?

I don’t think that’s the case, as the 1.10.0+cu113 pip wheel ships with these compute capabilities:

>>> import torch
>>> torch.__version__
'1.10.0+cu113'
>>> torch.cuda.get_arch_list()
['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86']

while yours returns:

sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37

which is the arch list shipped with the CUDA 10.2 binaries.
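The check behind that warning can be sketched in plain Python (a simplified model, not PyTorch's actual code; `is_supported` is a made-up name, and the real check also handles PTX forward compatibility via the `compute_XX` entries):

```python
# Simplified model of the startup check: a wheel can run on a GPU only
# if its compiled arch list contains an entry for that GPU's compute
# capability. (Ignores PTX JIT forward-compatibility for brevity.)

def is_supported(device_cap: int, arch_list: list) -> bool:
    # device_cap: compute capability as an integer, e.g. 86 for sm_86
    return f"sm_{device_cap}" in arch_list

# Arch lists quoted earlier in this thread:
cu102_archs = ["sm_37", "sm_50", "sm_60", "sm_61", "sm_70", "sm_75", "compute_37"]
cu113_archs = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75", "sm_80", "sm_86"]

print(is_supported(86, cu102_archs))  # False -> triggers the warning
print(is_supported(86, cu113_archs))  # True
```

This is why the cu113 wheel works on the RTX 3050 Ti while the cu102 build does not: only the former contains `sm_86` binaries.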

If you get stuck, create a new virtual environment, reinstall the desired wheel there, and verify the right versions in the install logs.
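That reinstall could look like this (a sketch assuming conda and the cu113 wheel index from the command quoted earlier; the env name `smr_env2` is made up):

```shell
# Create a fresh environment and install the CUDA 11.3 wheels,
# then confirm sm_86 appears in the compiled arch list.
conda create -n smr_env2 python=3.7 -y
conda activate smr_env2
pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 \
    -f https://download.pytorch.org/whl/cu113/torch_stable.html
python -c "import torch; print(torch.version.cuda, torch.cuda.get_arch_list())"
```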

You were right. The env I was using had an incorrect version installed. Thanks, got it working.