The training process calculates the forward loss fine, but when it runs the backward pass I encounter the following error:
Epoch\iter\data_size 0\0\100000 rkl_loss 1117.574 Learning rate 0.000001
Traceback (most recent call last):
File "/home/*/precode.py", line 270, in <module>
loss.backward()
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 152, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
variables, grad_variables, retain_graph)
RuntimeError: expected CUDA tensor (got CPU tensor)
Can anyone give me some advice?
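For context, a minimal sketch of the most common cause of this error, assuming the old Variable-era PyTorch shown in the traceback: the model lives on the GPU but some tensor that enters the loss (an input, a target, or a constant created on the CPU) was never moved with `.cuda()`, so the graph mixes CUDA and CPU tensors and backward() fails. The `nn.Linear` model, the random batch, and the mean-squared placeholder for rkl_loss below are all hypothetical stand-ins, not the code from the original post.

```python
import torch
import torch.nn as nn
from torch.autograd import Variable  # old (<0.4) Variable API, as in the traceback

use_cuda = torch.cuda.is_available()

model = nn.Linear(10, 1)             # stand-in for the real network
if use_cuda:
    model.cuda()                     # parameters now live on the GPU

inputs = torch.randn(4, 10)          # stand-in batch
targets = torch.randn(4, 1)

inputs, targets = Variable(inputs), Variable(targets)
if use_cuda:
    # Every tensor that participates in the loss must be on the same device
    # as the model; a single forgotten .cuda() on a target or constant is
    # enough to raise "expected CUDA tensor (got CPU tensor)" at backward().
    inputs, targets = inputs.cuda(), targets.cuda()

loss = ((model(inputs) - targets) ** 2).mean()   # placeholder for rkl_loss
loss.backward()
```

If the inputs and targets are already on the GPU, it is worth checking whether the loss function itself creates any tensors (e.g. with torch.zeros or torch.ones) without moving them to CUDA.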