PyTorch not compatible with sm_86 CUDA capability

I was training my model when I ran into this error:

/home/user/miniconda3/envs/wei-pip/lib/python3.8/site-packages/torch/cuda/__init__.py:143: UserWarning: 
NVIDIA GeForce RTX 3070 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_35 sm_50 sm_60 sm_61 sm_70 sm_75 compute_50.
If you want to use the NVIDIA GeForce RTX 3070 Laptop GPU GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))

I checked online and saw this post. I tried the fix suggested by Ilias_Giannakopoulos, but it did not work. Any other ideas on what I can do?

All of our current binaries support sm_86, and based on the error message it seems you might have installed an old PyTorch binary with CUDA <= 10.2 support.
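The warning above boils down to a simple membership test: the binary cannot run on the device when the device's compute capability is missing from the arch list it was compiled for. A minimal sketch of that check in plain Python (`is_supported` is a hypothetical helper for illustration, not a PyTorch API; PyTorch's real check also considers PTX forward compatibility via the `compute_XX` entries):

```python
def is_supported(arch_list, major, minor):
    """Return True if the device's compute capability (major, minor)
    appears in the binary's compiled arch list, e.g. 'sm_86'."""
    return f"sm_{major}{minor}" in arch_list

# The arch list from the warning above: sm_86 is absent, so the
# RTX 3070 Laptop GPU (capability 8.6) is unsupported by that binary.
old_arch_list = ["sm_35", "sm_50", "sm_60", "sm_61", "sm_70", "sm_75"]
print(is_supported(old_arch_list, 8, 6))   # False
```

In a working install, `torch.cuda.get_arch_list()` returns this list for the running binary, so the same comparison explains why upgrading fixes the warning.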

I installed the latest version of PyTorch though (CUDA 12.4).
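For reference, the CUDA 12.4 build is what the selector on pytorch.org currently generates; the exact index URL below is the cu124 one and will differ for other CUDA versions:

```shell
# Install a PyTorch wheel built against CUDA 12.4 (command pattern from
# the pytorch.org "get started" selector; swap cu124 for your CUDA version)
pip install torch --index-url https://download.pytorch.org/whl/cu124
```

Make sure this runs with the pip belonging to the conda environment you actually train in, otherwise the old binary stays active.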

The latest release works fine, ships with compute capability 8.6, and supports the 30XX series:

python -c "import torch; print(torch.__version__); print(torch.cuda.get_arch_list()); print(torch.cuda.get_device_properties(0)); print(torch.randn(1).cuda())"
2.5.1+cu124
['sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90']
_CudaDeviceProperties(name='NVIDIA GeForce RTX 3090', major=8, minor=6, total_memory=24249MB, multi_processor_count=82, uuid=7d7b1de4-b35c-b01c-8b9a-e3623cbfecb0, L2_cache_size=6MB)
tensor([-1.2947], device='cuda:0')

It turns out the conda environment I was using had an older version of PyTorch installed. Upgrading it fixed the issue.
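When a conda environment silently pins an old build like this, two quick stdlib-only checks confirm which interpreter and which torch version are actually active, without needing a GPU or even importing torch:

```python
# Confirm which Python interpreter is running -- it should live inside
# the conda environment you think you activated.
import sys
print(sys.executable)

# Report the installed torch version via package metadata, without
# importing the (possibly mismatched) package itself.
from importlib.metadata import PackageNotFoundError, version

try:
    print(version("torch"))
except PackageNotFoundError:
    print("torch is not installed in this environment")
```

If `sys.executable` points outside the expected env, or the reported version predates sm_86 support, that is the same mismatch the warning was complaining about.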