Is it possible to train a model using 2 different GPUs (Titan X and RTX2080 Ti) of different arch (Pascal and Turing)?

It runs fine if I only use one of them.

My code gives the following error:
```
Traceback (most recent call last):
  File "train.py", line 329, in <module>
    main()
  File "train.py", line 194, in main
    train(device, data_loader, model, margin_linear, lambda_func, criterion, optimizer, scheduler, args)
  File "train.py", line 252, in train
    loss.backward()
  File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
```

The error does not tell me much.

Does your code run without cuDNN?
Could you try disabling it with `torch.backends.cudnn.enabled = False` and running your script again?
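For reference, a minimal sketch of where that flag would go, near the top of the script before any CUDA work. The model and tensor shapes below are placeholders, not the ones from the original `train.py`:

```python
import torch

# Disable cuDNN so autograd falls back to PyTorch's native kernels.
# If the CUDNN_STATUS_BAD_PARAM error disappears, the problem is in the
# cuDNN code path; if it persists, look elsewhere (driver, hardware, data).
torch.backends.cudnn.enabled = False

def run_step():
    # Placeholder model and input; substitute the real model from train.py.
    model = torch.nn.Conv2d(3, 8, kernel_size=3)
    x = torch.randn(4, 3, 32, 32)
    loss = model(x).mean()
    loss.backward()  # executes without cuDNN when the flag above is False
    return loss

if __name__ == "__main__":
    run_step()
```

Disabling cuDNN usually slows training down, so it is a debugging step rather than a fix.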

I have solved the problem. It turned out to be a system-level issue rather than a PyTorch one: adding pci=nommconf to the kernel command line in the GRUB settings fixed it. Thanks a lot!
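For anyone hitting the same issue, a sketch of how that kernel parameter is typically added on Debian/Ubuntu-style systems (file locations and the regeneration command vary by distribution):

```shell
# Append pci=nommconf to the existing kernel options in /etc/default/grub,
# e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=nommconf"
sudo nano /etc/default/grub

# Regenerate the GRUB configuration and reboot for the change to take effect.
sudo update-grub
sudo reboot
```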