Error at backward: "the number of sizes provided must be greater or equal to the number of dimensions in the tensor"

I got this error during a call to backward():

Traceback (most recent call last):
  File "prova.py", line 60, in <module>
    loss.backward()
  File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 156, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 91, in apply
    return self._forward_cls.backward(self, *args)
  File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/reduce.py", line 26, in backward
    return grad_output.expand(ctx.input_size), None, None
  File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 722, in expand
    return Expand.apply(self, sizes)
  File "/home/simone/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py", line 111, in forward
    result = i.expand(*new_size)
RuntimeError: invalid argument 1: the number of sizes provided must be greater or equal to the number of dimensions in the tensor at /opt/conda/conda-bld/pytorch_1503970438496/work/torch/lib/TH/generic/THTensor.c:298

I’m using the PyTorch 0.2.0 package from conda. Here is a short example I wrote that should reproduce the error. Basically, I have a model that takes two vectors and computes some kind of loss. During training, at a certain point (after ~100 iterations with Adam, and somewhat more with SGD), I get this error. In the script I’m providing, I removed the training part, so it should raise the error at the first iteration. Additionally, I included in the script the state of the model at a point where it raises the error anyway: just change if False to if True at line 45.

I noticed that the error only occurs when the final loss is computed as at line 34 or at line 35 (currently commented out), and not at line 33, i.e., only when the margin variable is indexed by itself.


Apparently the problem occurs when margin contains all zeros. I don’t know whether that’s expected behavior or a bug, but it makes sense.
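A minimal sketch of that failure mode (the name `margin` is taken from the example above; the threshold and shapes are assumptions): when a tensor is indexed by a mask derived from itself and the mask selects nothing, the result is a zero-element tensor. Reducing that empty tensor and calling backward() is what triggered the expand error in 0.2.0. Recent PyTorch versions handle it, but guarding against the empty selection still avoids a NaN loss:

```python
import torch

margin = torch.zeros(5, requires_grad=True)
selected = margin[margin > 0]      # empty selection when margin is all zeros
print(selected.numel())            # 0

# Guard: fall back to a zero loss that still participates in the graph,
# since selected.mean() on an empty tensor would be NaN.
if selected.numel() > 0:
    loss = selected.mean()
else:
    loss = margin.sum() * 0.0

loss.backward()
print(margin.grad)                 # all-zero gradient, no error
```

This sketch uses the current (0.4+) tensor API rather than the old Variable wrapper from the 0.2/0.3 releases discussed here.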

Same problem here.
Works fine with 0.1.11.

I’m seeing lots of these errors on 0.2 master, as of the big ATen merge. Any idea what changed? Will this be fixed before 0.3?

@fritzo this error shouldn’t happen on the v0.3.0 release branch, https://github.com/pytorch/pytorch/tree/v0.3.0

We cut it right before the big ATen merge.
Ping me if you are seeing any issues on that branch.

I encounter this issue in 0.3 too.

I also see this error in 0.3.0.post4.

I calculated the loss as below:

loss = torch.mean(torch.stack(losses, 0), 0)
print(loss)
loss.backward()  # -- error

Variable containing:
 0.2055
 [torch.cuda.FloatTensor of size 1 (GPU 0)]

Traceback (most recent call last):
  File "train.py", line 160, in <module>
    train(model, train_data, optimizer)
  File "train.py", line 121, in train
    loss.backward()
  File "/home/jef/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/jef/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: invalid argument 1: the number of sizes provided must be greater or equal to the number of dimensions in the tensor at /pytorch/torch/lib/THC/generic/THCTensor.c:309
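In current PyTorch the same stack-then-mean pattern runs without error; a dimensionless mean also yields a true scalar, which is what backward() expects. A small self-contained sketch (the parameters and per-sample losses are synthetic stand-ins for the code above):

```python
import torch

params = torch.randn(4, requires_grad=True)
losses = [params[i] ** 2 for i in range(4)]   # one scalar loss per sample

# Stack the scalar losses into a 1-D tensor, then reduce to a scalar.
loss = torch.stack(losses, 0).mean()
print(loss.shape)   # torch.Size([]) -- a 0-dim scalar

loss.backward()
print(params.grad)
```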

So it looks like the following fix is not included in 0.3? I installed PyTorch from a binary.

After I built PyTorch from source (‘0.4.0a0+63ac363’), the code works without errors.

Can you please provide a link to the source update? I’m having the same problem, and I don’t have sudo, so I have to tell the admins the precise update I need them to install. Thank you!

Updating to 0.4 (commit 33bb849a7396de54293edc0ed00ac1dfd07b03ff) also fixed this for me.

I’m experiencing the same issue. How do I update PyTorch to 0.4 from Miniconda on a Mac?
Any suggestions?

You have to build PyTorch from source: instructions

I updated PyTorch following the instructions, but now I get the following error:
RuntimeError: expand(torch.FloatTensor{[1, 1, 1]}, size=[60, 1]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)

The error seems related to the fact that I’m summing (over the time dimension) all the output states of an RNN. Aside from the fact that this may be conceptually wrong, any idea why it triggers this error?

Are you trying to expand the first dimension to 60, or what is your use case?
You could try this code as a reference:

torch.randn(1, 1, 1).expand(60, 1, 1)

Here are some examples on how to use expand.
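The error message above matches expand’s documented rule: you must pass at least as many sizes as the tensor has dimensions, and only size-1 dimensions can be expanded. A quick sketch of both the working and failing calls (the squeeze at the end is one possible fix, assuming the leading dimension is unwanted):

```python
import torch

x = torch.randn(1, 1, 1)

y = x.expand(60, 1, 1)   # OK: three sizes for a 3-D tensor
print(y.shape)           # torch.Size([60, 1, 1])

try:
    x.expand(60, 1)      # fails: only two sizes for a 3-D tensor
except RuntimeError as e:
    print(e)

# If the extra dimension is unwanted, remove it first, then expand.
z = x.squeeze(0).expand(60, 1)
print(z.shape)           # torch.Size([60, 1])
```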

https://github.com/vanzytay/pytorch_sentiment_rnn/issues/7
RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 2 and 3 at /opt/conda/conda-bld/pytorch_1518238409320/work/torch/lib/THC/generic/THCTensorMath.cu:102
My torch version is 0.3.1.post2, installed using conda.