My model consists of two parts: a pre-trained model A whose output is fed into model B. I trained model A on GPU0. I then froze model A and am training model B on GPU1.
But I got the following error:
RuntimeError: expected device cuda:0 but got device cuda:1
Can anyone help me? Thanks a lot.
Could you explain your use case a bit more, i.e. which model is located on which GPU?
This error is raised if you are trying to apply an operation to tensors that are located on different devices. E.g., if modelA is on GPU0 and modelB is on GPU1, you would have to make sure the output of modelA is transferred to GPU1 before passing it to modelB.
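Something along these lines should work; this is a minimal sketch assuming modelA and modelB are plain nn.Modules (the layer shapes, device names, and input are placeholders, not taken from your code):

```python
import torch
import torch.nn as nn

dev0 = torch.device("cuda:0")
dev1 = torch.device("cuda:1")

modelA = nn.Linear(10, 10).to(dev0)   # frozen, pre-trained part on GPU0
modelB = nn.Linear(10, 2).to(dev1)    # part being trained on GPU1

x = torch.randn(4, 10, device=dev0)   # input lives on the same device as modelA

with torch.no_grad():                 # modelA is frozen, so no gradients needed
    a_out = modelA(x)                 # output is on cuda:0

# Move the intermediate activation to GPU1 before feeding modelB;
# without this .to(dev1), modelB(a_out) would raise the device-mismatch error.
out = modelB(a_out.to(dev1))
```

The key line is the `.to(dev1)` call on the intermediate activation: every tensor that enters modelB has to be on the same device as modelB's parameters.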