Adding a value to the loss variable

Hello, I'm currently working on a spiking neural network and trying to implement exponential regularization.

The exponential regularization cost function gives a loss value per neuron, which I sum into lossum before adding it to the loss variable: loss = F.nll_loss(output, target) + lossum.
However, doing this seems to do nothing, even when I replace lossum with something like 200000 (which should make the network behave completely differently than normal), so I guess this isn't the right way to do it.

How can I add a value to my loss so that it is actually taken into account when backpropagating?


Hi,

That value is taken into account, but the gradient of a constant is zero: grad(loss + C) = grad(loss) + grad(C) = grad(loss). So adding a constant to your loss won't change the gradients you get.
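To see this concretely, here is a quick sketch with a toy linear layer standing in for the network (the shapes and the exponential penalty are just illustrative, not your actual SNN regularizer): adding a plain constant leaves the gradients unchanged, while a penalty computed from tensors that are part of the autograd graph does change them.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for the network: one linear layer (hypothetical shapes).
layer = torch.nn.Linear(10, 5)
x = torch.randn(8, 10)
target = torch.randint(0, 5, (8,))

logits = layer(x)
output = F.log_softmax(logits, dim=1)
base_loss = F.nll_loss(output, target)

# Case 1: adding a constant leaves the gradients untouched.
grad_base = torch.autograd.grad(base_loss, layer.weight, retain_graph=True)[0]
grad_const = torch.autograd.grad(base_loss + 200000.0, layer.weight, retain_graph=True)[0]
print(torch.allclose(grad_base, grad_const))  # True: grad(loss + C) == grad(loss)

# Case 2: a penalty built from tensors in the graph does change the gradients.
# Hypothetical exponential penalty on the activations; a real SNN regularizer
# would use its own per-neuron quantities here instead.
reg = torch.exp(logits).mean()
grad_reg = torch.autograd.grad(base_loss + reg, layer.weight)[0]
print(torch.allclose(grad_base, grad_reg))  # False: the penalty affects the gradients
```

In short, the regularization term needs to be computed from tensors that are connected to the graph (activations, membrane potentials, weights), not from plain Python numbers or detached values.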

Thank you, I guess that makes sense. My regularization attempt had a nearly constant value too, so I guess that's why it didn't do anything either.