Error during backpropagation on GPU

I get the following error when backward() is called in my code:

torch/autograd/__init__.py in backward(variables, grad_variables, retain_graph, create_graph, retain_variables)
    Variable._execution_engine.run_backward(
        variables, grad_variables, retain_graph)

RuntimeError: Expected object of type Variable[torch.cuda.ByteTensor] but found type Variable[torch.ByteTensor] for argument #1 'mask'

I am not using any ByteTensor variables in my code, so I am not able to understand where this comes from.
Any help is appreciated.

Could you provide a little more context about what you’re doing?

It’s possible that some operation in your forward pass produces a ByteTensor somewhere. Are you using DataParallel?
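
For example (a minimal sketch, not the original poster’s code): comparison operators in that generation of PyTorch returned ByteTensors implicitly, so a padding mask built from a CPU tensor is a CPU ByteTensor even though no ByteTensor is ever declared. Written against the current API for brevity:

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Hypothetical attention scores living on the GPU.
    scores = torch.randn(2, 5, device=device, requires_grad=True)

    # A comparison creates the mask implicitly; because `lengths` lives
    # on the CPU, the mask does too.
    lengths = torch.tensor([3, 5])
    mask = torch.arange(5).unsqueeze(0) >= lengths.unsqueeze(1)

    # scores.masked_fill(mask, float("-inf"))  # mismatch: CUDA scores, CPU mask

    # Moving the mask to the same device as the scores resolves it:
    attn = torch.softmax(scores.masked_fill(mask.to(device), float("-inf")), dim=-1)
    attn.sum().backward()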

I am solving a seq2seq problem, which requires an encoder-decoder model. None of my forward-pass variables are of Byte type; I checked the type of every variable with:

var.data.type()
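
One way to survey everything a module owns at once, rather than variable by variable (a sketch; `model` stands in for the network):

    # Hypothetical helper: print the type of every parameter and buffer a
    # module owns, to spot a stray CPU or Byte tensor.
    def report_types(model):
        for name, p in model.named_parameters():
            print("param ", name, p.type())
        for name, b in model.named_buffers():
            print("buffer", name, b.type())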

One thing you could do is try to gdb through it and figure out what function is throwing that error.

You can do something like:

$ gdb python
(gdb) catch throw
(gdb) run script.py
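
On recent PyTorch versions there is also a pure-Python alternative: autograd anomaly detection re-raises the backward error with a traceback pointing at the forward operation that produced it (a sketch; `model` and `inputs` are stand-ins):

    import torch

    # Run the failing step under anomaly detection; the RuntimeError from
    # backward() will then reference the forward op that caused it.
    with torch.autograd.detect_anomaly():
        loss = model(inputs)
        loss.backward()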

Thank you very much, Richard. I will do this and try to pin down the exact location of the error.