Hi - I have a machine with two GPUs. I noticed that if I run my code on the second device (cuda id=1), it runs for some time and then errors out with the message below. I'm not doing any memory sharing or multi-GPU programming. I'm using torch version 1.2.0. Is this an issue with this version? What is a fix for this error?
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:260
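For context, this error typically means one tensor in an operation lives on a different GPU than the others, e.g. an input created on the default device (cuda:0) while the model sits on cuda:1. A minimal sketch of the pattern that avoids it (module and tensor names are illustrative, not from my actual code):

```python
import torch

# Pick the second GPU when available, otherwise fall back to CPU
device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")

# Move the model AND create every input on that same device
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(3, 4, device=device)  # not torch.randn(3, 4).cuda(), which defaults to cuda:0

out = model(x)
print(out.shape)  # torch.Size([3, 2])
```

Calling `.cuda()` without an index (or creating tensors inside the model without passing `device=`) places them on cuda:0, which would produce exactly this mismatch when the model is on cuda:1.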
Thank you,
Tomojit