Traceback (most recent call last):
File "prova.py", line 60, in <module>
loss.backward()
File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 156, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 98, in backward
variables, grad_variables, retain_graph)
File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 91, in apply
return self._forward_cls.backward(self, *args)
File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/reduce.py", line 26, in backward
return grad_output.expand(ctx.input_size), None, None
File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 722, in expand
return Expand.apply(self, sizes)
File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py", line 111, in forward
result = i.expand(*new_size)
RuntimeError: invalid argument 1: the number of sizes provided must be greater or equal to the number of dimensions in the tensor at /opt/conda/conda-bld/pytorch_1503970438496/work/torch/lib/TH/generic/THTensor.c:298
I’m using the PyTorch 0.2.0 package from Conda. Here is a short example I wrote which should reproduce the error. Basically, I have a model which takes two vectors and computes some kind of loss. During training, at a certain point (after ~100 iterations with Adam, and somewhat more with SGD), I get that error. In the script I’m providing, I removed the training part, so it should give the error at the first iteration. Additionally, I put into the script the state of the model at a point where it gives the error anyway: just change if False to if True at line 45.
I noticed that the error only occurs when the final loss is computed as at line 34 or at line 35 of the script (the latter currently commented out), and not as at line 33, i.e., only when the margin variable is indexed by itself.
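To make that concrete, here is a minimal sketch of the "indexed by itself" pattern (names and shapes are illustrative, not the actual script; on recent PyTorch versions this snippet runs cleanly):

import torch
from torch.autograd import Variable

margin = Variable(torch.randn(10), requires_grad=True)
mask = margin > 0            # a mask derived from margin itself
selected = margin[mask]      # margin indexed by (a function of) itself
loss = selected.mean()
loss.backward()              # in my script, the analogous backward is what raises the RuntimeError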
loss = torch.mean(torch.stack(losses, 0), 0)
print(loss)
loss.backward() # -- error
Variable containing:
0.2055
[torch.cuda.FloatTensor of size 1 (GPU 0)]
Traceback (most recent call last):
File "train.py", line 160, in <module>
train(model, train_data, optimizer)
File "train.py", line 121, in train
loss.backward()
File "/home/jef/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/jef/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
variables, grad_variables, retain_graph)
RuntimeError: invalid argument 1: the number of sizes provided must be greater or equal to the number of dimensions in the tensor at /pytorch/torch/lib/THC/generic/THCTensor.c:309
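For reference, a stripped-down version of the code around the failing backward (variable names simplified and data made up; on current releases this snippet runs without the error):

import torch
from torch.autograd import Variable

w = Variable(torch.randn(3), requires_grad=True)
data = torch.randn(5, 3)
# per-example losses collected in a Python list, as in the training loop above
losses = [(w * Variable(data[i])).sum() for i in range(5)]
loss = torch.mean(torch.stack(losses, 0), 0)
loss.backward()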
Can you please provide a link to the update source? I’m having the same problem and don’t have sudo, so I have to provide the admins with the precise update I need them to install. Thank you!
I have updated PyTorch following the instructions, but now I get the following error:
RuntimeError: expand(torch.FloatTensor{[1, 1, 1]}, size=[60, 1]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)
The error seems related to the fact that I’m summing up (over the time dimension) all the output states of an RNN. Aside from the fact that this may be conceptually wrong, any idea why it triggers this error?
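Roughly what I’m doing, as a sketch (the GRU and the layer sizes here are stand-ins, not my actual model):

import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.GRU(input_size=8, hidden_size=1)
x = Variable(torch.randn(60, 1, 8))   # (seq_len=60, batch=1, features=8)
out, _ = rnn(x)                       # out has shape (60, 1, 1)
summed = out.sum(0)                   # sum over the time dimension
loss = summed.sum()
loss.backward()                       # the backward through the time-sum is where my run fails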
https://github.com/vanzytay/pytorch_sentiment_rnn/issues/7
RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 2 and 3 at /opt/conda/conda-bld/pytorch_1518238409320/work/torch/lib/THC/generic/THCTensorMath.cu:102
My torch version is 0.3.1.post2, installed using conda.
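That error comes from torch.cat (or a similar op) being given tensors whose numbers of dimensions differ. A minimal illustration, with made-up shapes, plus one common fix via unsqueeze:

import torch

a = torch.randn(4, 5)        # 2-D
b = torch.randn(4, 5, 3)     # 3-D
# torch.cat([a, b], 2)       # raises: Tensors must have same number of dimensions: got 2 and 3
c = torch.cat([a.unsqueeze(2), b], 2)   # (4, 5, 1) cat (4, 5, 3) -> (4, 5, 4)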