Rescaling MSELoss

Hi,
I am new to PyTorch, so sorry in advance if this is a trivial question. I would simply like to rescale nn.MSELoss() in the spirit of rescaled_MSE() = C * MSELoss(). I tried to define a tensor with C = torch.FloatTensor(1); C[0] = "some number", but this gives an error because the multiplication does not accept the MSELoss module as an argument.
How can I rescale the nn.MSELoss() without constructing a loss function myself?
I know that this amounts to rescaling the gradients/learning rate, but I would like to fix the scale of the loss function to separate it conceptually from the learning rate scale.
Thank you, any comments are appreciated.


You could just rescale the loss value itself. It looks like you are trying to rescale the loss function object, which is probably the workflow in static graph frameworks (e.g. Theano).
Try the following:

import torch
import torch.nn as nn

criterion = nn.MSELoss()
scale = torch.tensor([2.0])  # your rescaling factor C

# your training procedure
loss = criterion(output, target)  # output, target come from your model/data
loss = loss * scale  # rescale the loss value before backprop
loss.backward()
...
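To make the equivalence mentioned above concrete, here is a small self-contained sketch (the toy model, input, and target are made up for illustration) checking that multiplying the loss by a constant C multiplies the parameter gradients by the same C:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)           # hypothetical toy model
x = torch.randn(8, 4)             # dummy input batch
target = torch.randn(8, 1)        # dummy targets
criterion = nn.MSELoss()

# Gradients of the unscaled loss
criterion(model(x), target).backward()
base_grad = model.weight.grad.clone()

# Gradients of the loss rescaled by C = 2.0
model.zero_grad()
(2.0 * criterion(model(x), target)).backward()
scaled_grad = model.weight.grad

print(torch.allclose(scaled_grad, 2.0 * base_grad))  # True
```

So fixing the loss scale this way is numerically the same as scaling the gradients, but it keeps the factor conceptually separate from the learning rate, as you wanted.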

Great! Thanks @ptrblck for your suggestion. The only change is that I needed to use
scale = torch.FloatTensor([2.0])
Otherwise it works.

If you are using the current stable release, you don’t have to change this line.
Have a look at the website for install instructions.
There are a lot of improvements and nice features! :wink: