Hello, I'm currently working on a spiking neural network and trying to implement exponential regularization.
The exponential regularization cost function gives a loss value per neuron, which I sum into `lossum` before adding it to the loss variable: `loss = F.nll_loss(output, target) + lossum`.
However, doing this seems to have no effect, even when I replace `lossum` with something like 200000 (which should make the network behave completely differently than normal), so I guess this isn't the right way to do it.
How can I add a value to my loss so that it is actually taken into account during backpropagation?
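For context, here is a minimal sketch of what I understand should happen: a plain Python constant (like 200000) has zero gradient, so adding it shifts the loss value but cannot change backpropagation at all; the regularization term has to be a tensor that stays connected to the network's parameters in the autograd graph. All the names below (`weights`, `spike_counts`, `lambda_reg`) are illustrative stand-ins, not my actual network:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for the real network. The key point: everything the
# regularizer touches must stay in the autograd graph -- no .item(),
# .detach(), or float() conversions along the way.
weights = torch.randn(5, 3, requires_grad=True)   # pretend parameters
x = torch.randn(4, 5)                             # pretend input batch
target = torch.tensor([0, 2, 1, 0])               # pretend labels

logits = x @ weights
spike_counts = torch.relu(logits)                 # pretend per-neuron activity

# Exponential regularization: a loss value per neuron, summed to one scalar.
lambda_reg = 0.01                                 # strength (assumed value)
reg_per_neuron = torch.exp(spike_counts) - 1.0
loss_sum = lambda_reg * reg_per_neuron.sum()      # still requires_grad

loss = F.nll_loss(F.log_softmax(logits, dim=1), target) + loss_sum
loss.backward()                                   # grads now include the reg term

print(weights.grad is not None)                   # True
```

If `loss_sum` were computed from detached values (or were a bare float), `loss.backward()` would produce exactly the same gradients as without it, which matches the behavior I'm seeing.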