In my code, I have 2 models running on different GPUs, and I calculate the loss between their outputs. I have set the input on cuda:0 as well as the src_model, but an error occurred while training.
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 0 does not equal 1 (while checking arguments for cudnn_convolution)
Did you push the output of your first model to the GPU of the second model?
I tried to create an example in your other thread. Does this not work?
If so, could you provide a small executable code snippet?
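To illustrate the question: when two modules live on different GPUs, the output of the first must be moved to the second model's device before the next forward pass. A minimal sketch (the module shapes are made up; it falls back to CPU when two GPUs are not available, so the device-transfer call is then a no-op):

```python
import torch
import torch.nn as nn

# Hypothetical two-device setup; uses CPU for both when <2 GPUs are present
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 2 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

src_model = nn.Conv2d(3, 8, 3).to(dev0)  # first model on GPU0
dst_model = nn.Conv2d(8, 8, 3).to(dev1)  # second model on GPU1

x = torch.randn(1, 3, 16, 16, device=dev0)
out0 = src_model(x)
# Without the .to(dev1) transfer, this forward pass would raise the
# "device 0 does not equal 1" RuntimeError on a two-GPU machine
out1 = dst_model(out0.to(dev1))
```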
The following is the code. The error occurs while calculating src_fc6, src_fc7, src_fc8, src_scoremap = self.src_model(src_inputs, transfer=True):
dst_fc6, dst_fc7, dst_fc8, dst_scoremap = self.dst_model(dst_inputs, transfer=True)
src_fc6, src_fc7, src_fc8, src_scoremap = self.src_model(src_inputs, transfer=True)
src_domain_outputs = self.discriminator([src_fc6.to(1), src_fc7.to(1), src_fc8.to(1)])
dst_domain_outputs = self.discriminator([dst_fc6, dst_fc7, dst_fc8])
Which line throws this error?
Is self.discriminator on GPU1?
Could you check this by printing any layer of it?
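One quick way to do that check: the device of any parameter tells you where the module's weights live. A sketch with a stand-in model (the real self.discriminator is not shown in the thread):

```python
import torch.nn as nn

discriminator = nn.Sequential(nn.Linear(4, 2))  # stand-in for self.discriminator
# Grab the first parameter and inspect its device (e.g. cpu, cuda:0, cuda:1)
dev = next(discriminator.parameters()).device
print(dev)
```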
src_fc6, src_fc7, src_fc8, src_scoremap = self.src_model(src_inputs, transfer=True)
Could you check the device of src_inputs and all model parameters? Do you have any other parameters besides the layers in src_model?
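A sketch of such a check (the model here is a stand-in for src_model): collecting the set of devices across all parameters and buffers will reveal whether any single weight was left on a different GPU, which would explain a mismatch inside the model's own forward pass.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))  # stand-in for src_model
x = torch.randn(1, 3, 8, 8)  # stand-in for src_inputs

print(x.device)
# Parameters hold the learnable weights; buffers hold e.g. BatchNorm running stats.
# Both sets should contain exactly one device, matching the input's device.
param_devices = {p.device for p in model.parameters()}
buffer_devices = {b.device for b in model.buffers()}
print(param_devices, buffer_devices)
```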