Traceback (most recent call last):
  File "main.py", line 67, in <module>
    network.train()
  File "/home/sp/text-classification-cnn/network/cnnTextNetwork.py", line 115, in train
    logit = self.model(feature)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/sp/text-classification-cnn/model/cnnText/cnntext.py", line 48, in forward
    x = [F.relu(conv(x)).squeeze(3) for conv in self.convs1]
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/conv.py", line 237, in forward
    self.padding, self.dilation, self.groups)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py", line 39, in conv2d
    return f(input, weight, bias)
RuntimeError: tensors are on different GPUs
It seems that the model's parameters and the input are on different GPUs. It would help if you could provide more information, such as the definition of self.model and how you process the input.
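The usual fix for this error is to make sure the model's parameters and the input tensor are moved to the same device before the forward pass. A minimal sketch with current PyTorch (the `Conv2d` shapes here are illustrative, not taken from the code above):

```python
import torch
import torch.nn as nn

# Pick one device and move BOTH the model and the input to it.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Conv2d(1, 8, kernel_size=3).to(device)  # moves all registered parameters
x = torch.randn(4, 1, 28, 28).to(device)           # moves the input tensor too

out = model(x)  # no device-mismatch error: weights and input share `device`
```

If only one of the two is moved (e.g. `model.cuda()` without `x.cuda()`, or the model split across GPUs without matching inputs), conv2d receives a weight and an input on different devices and raises exactly this RuntimeError.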
Hi @chenyuntc, could you tell me more about using nn.ModuleList? I also get a similar error even when I only use the CPU.
  File "/home/zli/WorkSpace/PyWork/Terraref/panicle_detection/faster_rcnn/network.py", line 16, in forward
    x = self.conv(x)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/conv.py", line 237, in forward
    self.padding, self.dilation, self.groups)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py", line 40, in conv2d
    return f(input, weight, bias)
RuntimeError: tensors are on different GPUs
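A common cause of this error that nn.ModuleList addresses: storing submodules in a plain Python list does not register them with the parent Module, so `model.cuda()` and `model.parameters()` never see them and their weights stay behind on the wrong device. A minimal sketch of the difference (the `Conv2d` shapes are illustrative, loosely modeled on the `self.convs1` list in the first traceback):

```python
import torch.nn as nn

class Bad(nn.Module):
    def __init__(self):
        super(Bad, self).__init__()
        # Plain Python list: these convs are NOT registered as submodules,
        # so Bad().cuda() would not move their weights to the GPU.
        self.convs1 = [nn.Conv2d(1, 2, (k, 300)) for k in (3, 4, 5)]

class Good(nn.Module):
    def __init__(self):
        super(Good, self).__init__()
        # nn.ModuleList registers each conv with the parent module, so
        # .cuda()/.parameters() cover them and the device mismatch goes away.
        self.convs1 = nn.ModuleList(nn.Conv2d(1, 2, (k, 300)) for k in (3, 4, 5))

print(len(list(Bad().parameters())))   # the hidden convs contribute nothing
print(len(list(Good().parameters())))  # weight + bias for each of the 3 convs
```

Wrapping the list in `nn.ModuleList` (or `nn.Sequential`, when the modules run in order) is usually all that is needed to make a single `.cuda()` call move the whole model.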