I am trying to multiply my loss by a value, `self.strength`, but this strength value doesn't seem to have the desired effect. For example, if `self.strength` is equal to 1000, the output with my optimizer is almost the same as with `self.strength` equal to 2000, 5000, 15000, etc.
```python
self.loss = self.crit(input, self.target) * self.strength
```

And then I call `.backward()`:

```python
self.loss.backward(retain_graph=True)
```
The other variables I am using are:

```python
self.crit = nn.MSELoss()
self.target = input.detach()
```
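For reference, here is a minimal standalone sketch of the setup (the tensor shape, the perturbation step that moves the input away from the captured target, and the choice of strength values are assumptions for illustration):

```python
import torch
import torch.nn as nn

crit = nn.MSELoss()

# Capture the target from the input, as in self.target = input.detach(),
# then move the input away from it so the loss is nonzero.
x = torch.randn(4, requires_grad=True)
target = x.detach().clone()
with torch.no_grad():
    x += 0.5

grads = {}
for strength in (1000, 2000):
    if x.grad is not None:
        x.grad.zero_()
    loss = crit(x, target) * strength
    loss.backward(retain_graph=True)
    grads[strength] = x.grad.clone()

# The raw gradient itself does scale linearly with strength.
print(grads[2000].abs().mean() / grads[1000].abs().mean())
```

So the scaling clearly reaches the gradients before the optimizer step, which is what makes the unchanged output so confusing.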
My strength value should produce significantly different results between values like 2000, 5000, and 15000.
This seemed to work in Lua/Torch, so I am at a loss as to why it's not working in PyTorch.