RuntimeError: Mismatch in shape

I am trying to run train_hopenet.py:

python3 train_hopenet.py --dataset AFLW2000 --data_dir datasets/AFLW2000 --filename_list datasets/AFLW2000/files.txt --output_string er

I got this error:

/home/redhwan/.local/lib/python3.8/site-packages/torch/optim/adam.py:90: UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information
  super(Adam, self).__init__(params, defaults)
Ready to train network.
Traceback (most recent call last):
  File "train_hopenet.py", line 193, in <module>
    torch.autograd.backward(loss_seq, grad_seq)
  File "/home/redhwan/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 166, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
  File "/home/redhwan/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 50, in _make_grads
    raise RuntimeError("Mismatch in shape: grad_output["
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).

How can I solve the issue?

torch.__version__ = 1.12.0+cu102

Thanks a lot in advance!

You might want to check if changing the grad_seq line to

grad_seq = [torch.ones([1]).cuda(gpu) for _ in range(len(loss_seq))]

(note the added brackets) resolves the issue.

First of all, thank you so much for your help.

Your suggestion didn’t solve the issue.

It is now working with:

grad_seq = [torch.tensor(1, dtype=torch.float).cuda(gpu) for _ in range(len(loss_seq))]
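
For anyone hitting the same error: the reason this works is that the losses in loss_seq are 0-dim (scalar) tensors in recent PyTorch versions, so the grad tensors passed to torch.autograd.backward must be 0-dim as well. Here is a minimal CPU-only sketch; the parameter tensors and losses below are stand-ins, not the actual Hopenet losses:

```python
import torch

# Stand-ins for loss_yaw, loss_pitch and loss_roll from train_hopenet.py:
# reduction losses return 0-dim (scalar) tensors in recent PyTorch versions.
params = [torch.randn(4, requires_grad=True) for _ in range(3)]
loss_seq = [(p ** 2).mean() for p in params]  # each has shape torch.Size([])

# torch.ones(1) has shape torch.Size([1]), which no longer matches the
# 0-dim losses; torch.tensor(1, dtype=torch.float) is 0-dim, so shapes agree.
# (.cuda(gpu) is dropped here to keep the sketch CPU-only.)
grad_seq = [torch.tensor(1, dtype=torch.float) for _ in range(len(loss_seq))]

torch.autograd.backward(loss_seq, grad_seq)  # no shape mismatch
print(params[0].grad.shape)  # torch.Size([4])
```

torch.ones([]).cuda(gpu) should work the same way, since it also creates a 0-dim tensor.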
