Everything is running smoothly in my model, but for some reason I get a weird error in the torch.cat() backward function. The forward pass and the loss compute OK, though… Does anyone have a clue what is going on?
Traceback (most recent call last):
File "/Users/miguel/Documents/Unbabel/pytorch-tools/pytorch_tools/models/slqe.py", line 217, in <module>
loss = net.update(input_source, input_target, tags, input_editor=input_editor, input_client=input_client)
File "/Users/miguel/Documents/Unbabel/pytorch-tools/pytorch_tools/models/slqe.py", line 176, in update
loss.backward()
File "/Users/miguel/Documents/Unbabel/pytorch-tools/venv/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/Users/miguel/Documents/Unbabel/pytorch-tools/venv/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 314, in backward
in zip(self.input_sizes, _accumulate(self.input_sizes)))
File "/Users/miguel/Documents/Unbabel/pytorch-tools/venv/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 313, in <genexpr>
return tuple(grad_output.narrow(self.dim, end - size, size) for size, end
RuntimeError: out of range at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:367
No you don’t, I was just wondering if you had a custom layer after that that could be misbehaving.
But that should work.
Could you make a minimal example that reproduces this error, please?
Oh,
Actually, support for negative dimensions in all modules was added only recently.
It is on the latest master branch, but may not be in the binary release yet.
If you cannot install from source, change the dim argument you give to the cat function to a positive value (e.g. inp.dim() - 1 instead of -1).
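A minimal sketch of that workaround, written against the current torch API (the tensor names and shapes here are illustrative, not from the original post): compute the positive equivalent of the negative dimension and pass that to torch.cat.

```python
import torch

x = torch.randn(2, 3, requires_grad=True)
y = torch.randn(2, 5, requires_grad=True)

# Instead of torch.cat([x, y], dim=-1), pass the equivalent
# positive index: the last axis is inp.dim() - 1.
dim = x.dim() - 1
out = torch.cat([x, y], dim)      # shape (2, 8)

out.sum().backward()              # backward runs with a positive dim
```

On releases that already support negative dimensions, both spellings behave identically, so the rewrite is safe to keep.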