I am using the C++ API.
When using either `l1_loss` or `mse_loss` I get an annoying warning when calling `loss.backward()`. Small code snippet:

```cpp
opt.zero_grad();
auto forward = model->forward(minibatch.features);
torch::Tensor loss = torch::mse_loss(forward, minibatch.labels);
// torch::Tensor loss = torch::l1_loss(forward, minibatch.labels);
loss.backward();
opt.step();
```
The runtime warning I get is as follows:
```
[W ..\..\aten\src\ATen\native\Resize.cpp:23] Warning: An output with one or more elements was resized since it had shape [64, 1], which does not match the required output shape [64, 64]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (function resize_output_check)
```
So, in this case my minibatch size is 64, and the features and labels are all correctly dimensioned as far as I can tell. I do not understand why a [64, 64] output shape is expected.
I searched for this online and found one related topic (DCGAN C++ warning after PyTorch update · Issue #819 · pytorch/examples · GitHub), but no good solution was proposed there.
Am I overlooking something in the `mse_loss`/`l1_loss` functions? Is this a known issue? Is there a way to change the warning verbosity in the C++ API? FYI: the code runs without errors and the neural net seems to fit properly. I am using LibTorch version 1.10.2+cpu.