Error with GPU0 Tesla K20m with cuda capability 3.5 and pytorch

Hello All,
Need your help in resolving this issue.
I am using PyTorch 1.11.0 built against CUDA 10.2 (the 1.11.0+cu102 binary) on a Tesla K20m GPU, and I am facing the issue below.

/home/garimak1/.local/lib/python3.7/site-packages/mmcv/__init__.py:21: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See mmcv/compatibility.md at master · open-mmlab/mmcv · GitHub for more details.
'On January 1, 2023, MMCV will release v2.0.0, in which it will remove '
/umbc/rs/nasa-access/users/garimak1/conda-u/envs/cot_retrieval/lib/python3.7/site-packages/torch/cuda/__init__.py:122: UserWarning:
Found GPU0 Tesla K20m which is of cuda capability 3.5.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 3.7.

warnings.warn(old_gpu_warn % (d, name, major, minor, min_arch // 10, min_arch % 10))
Traceback (most recent call last):
File "test.py", line 269, in
fold = 4, stride=10)
File "test.py", line 115, in get_profile_pred1
test_loss,patch_pred = get_pred(model=model,X_test=patch,Y_test=label,device="cuda:0")
File "/umbc/xfs1/student/users/garimak1/project/ver0.1/DL_3d_cloud_retrieval-main/COT_retrievals_from_LES_cloud_scenes_reflectances/utilities/utilities.py", line 297, in get_pred
output = model(X_test)
File "/umbc/rs/nasa-access/users/garimak1/conda-u/envs/cot_retrieval/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/umbc/xfs1/student/users/garimak1/project/ver0.1/DL_3d_cloud_retrieval-main/COT_retrievals_from_LES_cloud_scenes_reflectances/model_config/DNN2w.py", line 46, in forward
x = self.activation(self.conv1(x))
File "/umbc/rs/nasa-access/users/garimak1/conda-u/envs/cot_retrieval/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/umbc/rs/nasa-access/users/garimak1/conda-u/envs/cot_retrieval/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 447, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/umbc/rs/nasa-access/users/garimak1/conda-u/envs/cot_retrieval/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 444, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Unfortunately, this is expected with recent binary releases, as support for compute capability 3.5 has been dropped. You could try building PyTorch from source (e.g., as described in [SOLVED] PyTorch no longer supports this GPU because it is too old) or install older binaries that still support compute capability 3.5: Previous PyTorch Versions | PyTorch.
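Until you have a build that targets sm_35, one practical workaround is to detect the mismatch up front and fall back to the CPU instead of crashing mid-forward-pass. Below is a minimal sketch using the public `torch.cuda` API; the helper name `pick_device` and the minimum-capability default are my own choices, not part of PyTorch:

```python
# Sketch: only use the GPU if it meets the minimum compute capability
# that the installed PyTorch binary was built for (3.7 for the 1.11 wheels).
import torch

def pick_device(min_capability=(3, 7)):
    """Return a cuda device only if GPU 0 is new enough, else fall back to CPU."""
    if torch.cuda.is_available():
        # get_device_capability returns (major, minor), e.g. (3, 5) for a K20m
        if torch.cuda.get_device_capability(0) >= min_capability:
            return torch.device("cuda:0")
    return torch.device("cpu")

device = pick_device()
print(device)
```

You would then pass `device` into `get_pred` instead of hard-coding `device="cuda:0"`; on the K20m this selects the CPU, which is slow but avoids the "no kernel image" error.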