Runtime warning loss.backward resize

I am using the C++ API.
When using either l1_loss or mse_loss, I get an annoying warning when calling loss.backward(). A small code snippet:

  auto forward = model->forward(minibatch.features);
  torch::Tensor loss = torch::mse_loss(forward, minibatch.labels);
  // alternatively: torch::Tensor loss = torch::l1_loss(forward, minibatch.labels);

The runtime warning I get is as follows:

[W ..\..\aten\src\ATen\native\Resize.cpp:23] Warning: An output with one or more elements was resized since it had shape [64, 1], which does not match the required output shape [64, 64]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (function resize_output_check)

So, in this case my minibatch size is 64, and the features and labels are all correctly dimensioned. I do not understand why it expects a [64, 64] output shape.
I searched online and found one topic (DCGAN C++ warning after PyTorch update · Issue #819 · pytorch/examples · GitHub) that deals with this problem, but no good solution seems to have been proposed.

Am I overlooking something in the mse/l1 loss functions? Is this a known issue? Is there a way to change the verbosity of warning printing in the C++ API? FYI: the code runs without errors and the neural net seems to fit properly. I am using LibTorch version 1.10.2+cpu.

What shapes do the forward and minibatch.labels have?
I’m wondering if you are broadcasting in mse_loss (which could yield wrong results, if that’s not intended).


With a minibatch size of 64, forward is a CPUFloatType{64,1}, i.e. it contains one prediction per sample, while minibatch.labels is a CPUFloatType{64}.

So, you are completely right: I was broadcasting in mse_loss. I fixed it by making minibatch.labels a CPUFloatType{64,1}, so it has the same shape as forward.

Thanks for pointing me in the right direction!