RuntimeError: expected CUDA tensor (got CPU tensor)

The training process computes the forward loss once, but when it does the backward pass, I encounter the following error:

Epoch\iter\data_size 	 0\0\100000 		 rkl_loss 1117.574 		 Learning rate 0.000001
Traceback (most recent call last):
  File "/home/*/precode.py", line 270, in <module>
    loss.backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 152, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
RuntimeError: expected CUDA tensor (got CPU tensor)

Can anyone give me some advice?

Could you post your forward code here, or a minimal example of it that causes the error?
My first advice would be to check that you’re consistently using CUDA or CPU tensors, but it’s strange that
the forward pass doesn’t error out.
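For reference, the usual cause of this error is a tensor created on the CPU (e.g. a constant or a freshly allocated buffer inside `forward`) getting mixed into a graph of CUDA tensors; some ops tolerate the mix in the forward pass and only fail in `backward()`. A minimal sketch of a device-consistency check (the helper name `check_same_device` is ours, not from this thread, and it uses the modern `.device` attribute rather than the old `.is_cuda` flag):

```python
import torch

def check_same_device(*tensors):
    """Return True if every given tensor lives on the same device.

    Hypothetical debugging helper: call it on the inputs, parameters,
    and intermediate tensors of your forward pass to find the stray
    CPU tensor before backward() blows up.
    """
    devices = {t.device for t in tensors}
    return len(devices) <= 1

x = torch.randn(3)  # CPU tensor
w = torch.randn(3)  # CPU tensor
print(check_same_device(x, w))   # same device, so the check passes

# A tensor on another device (here "meta", which needs no GPU to
# demonstrate the mismatch) makes the check fail:
m = torch.empty(3, device="meta")
print(check_same_device(x, m))
```

In practice the fix is to move the model and every input to the same device up front (`model.cuda()` and `input = input.cuda()` in the PyTorch version from this thread), so nothing in the graph is left on the CPU.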

Thanks, I have solved this error. As you said, there were some tensors that were not CUDA tensors.