PyTorch for Tesla K40c with CUDA 11.2

I am trying to train a model on multiple GPUs. I have two Tesla K40c cards and one GeForce GTX 1080. My PyTorch version is 1.7.1 with CUDA 11.2.
PyTorch 1.3.1 onwards dropped binary support for GPUs with compute capability 3.5, which means I am unable to use the Tesla K40c. Is there any way I can use the Tesla K40c with CUDA 11.2 and PyTorch?

You could build PyTorch from source for compute capability 3.5 as described here.
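In case it helps others, a minimal sketch of such a source build (the repository URL is the official one; the exact dependency setup depends on your system, so treat this as an outline rather than a definitive recipe):

```shell
# Clone the PyTorch source with its submodules and install build deps.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt

# Restrict the build to compute capability 3.5 so the K40c
# gets native kernel images.
export TORCH_CUDA_ARCH_LIST="3.5"
python setup.py install
```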

As recommended, I have built PyTorch from source and am able to train on the Tesla K40c, but when trying to run on the GTX 1080 I get the following error:

GeForce GTX 1080 with CUDA capability sm_61 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_35.
If you want to use the GeForce GTX 1080 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
  File "<string>", line 1, in <module>
RuntimeError: CUDA error: no kernel image is available for execution on the device
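For anyone debugging this, you can check which architectures your install was compiled for, and what each device reports, with a quick snippet like the one below (`torch.cuda.get_arch_list` returns an empty list on CPU-only builds):

```python
import torch

# Architectures the current PyTorch binary was compiled for,
# e.g. ['sm_35'] for a build restricted to the K40c.
print(torch.cuda.get_arch_list())

# Compute capability of each visible device, e.g. (6, 1) for a GTX 1080.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(torch.cuda.get_device_name(i),
              torch.cuda.get_device_capability(i))
```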

You can specify multiple GPU architectures by building with:

TORCH_CUDA_ARCH_LIST="3.5 6.1" python setup.py install

where you can pass multiple architectures (and optionally +PTX variants) via the environment variable.
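To illustrate the matching involved, here is a simplified, hypothetical model of it (the helper `can_run` is mine, not PyTorch code): a binary runs natively on a GPU only if an exact sm_XY kernel image is present, while a +PTX entry (reported as compute_XY) can be JIT-compiled for any device of equal or higher capability.

```python
# Simplified, hypothetical sketch of CUDA arch compatibility:
# an exact sm match runs natively; a PTX (compute_XY) entry can be
# JIT-compiled on any device with capability >= XY.
def can_run(arch_list, device_capability):
    major, minor = device_capability
    for arch in arch_list:
        kind, _, cap = arch.partition("_")   # e.g. "sm_35" or "compute_61"
        cap_major, cap_minor = int(cap[0]), int(cap[1])
        if kind == "sm" and (cap_major, cap_minor) == (major, minor):
            return True                      # exact native kernel image
        if kind == "compute" and (cap_major, cap_minor) <= (major, minor):
            return True                      # PTX can be JIT-compiled
    return False

# A build with TORCH_CUDA_ARCH_LIST="3.5 6.1" covers both cards:
print(can_run(["sm_35", "sm_61"], (3, 5)))   # K40c -> True
print(can_run(["sm_35", "sm_61"], (6, 1)))   # GTX 1080 -> True
print(can_run(["sm_35"], (6, 1)))            # the original failure -> False
```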