Hey,
Let’s say I have one trained neural network and want to train another one with the exact same topology.
In the 2nd network’s loss I’ll have a base term like MSE, and I want to extend it with an extra term: the similarity between the two networks’ parameters.
For now I’ve defined similarity as 1 / sum(abs(old model - new model)), so if the two networks had exactly the same parameters this value would be infinite. The loss is supposed to push the 2nd network to learn something different from the 1st.
How would I go about doing this? I obviously have to write a custom loss function that adds this term in its forward method. I’m a little confused, however, about how to make sure autograd computes the correct gradient.
autograd has to somehow understand how the 2nd network’s parameters influence this additional loss term in order to compute the correct gradient.
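To make the question concrete, here’s a rough sketch of what I have in mind (the class name, the epsilon, and the penalty weight are all just placeholders I made up):

```python
import torch
import torch.nn as nn

class SimilarityLoss(nn.Module):
    """MSE plus a penalty that grows as the new model's parameters
    approach the old (frozen) model's parameters."""

    def __init__(self, old_model, new_model, weight=0.1):
        super().__init__()
        self.mse = nn.MSELoss()
        self.old_model = old_model  # trained reference network, kept fixed
        self.new_model = new_model  # network currently being trained
        self.weight = weight        # made-up penalty coefficient

    def forward(self, pred, target):
        base = self.mse(pred, target)
        # Sum of absolute parameter differences between the two networks.
        # detach() the old params so no gradient flows into the old model.
        diff = sum(
            (p_new - p_old.detach()).abs().sum()
            for p_new, p_old in zip(self.new_model.parameters(),
                                    self.old_model.parameters())
        )
        # 1 / diff blows up when the networks are identical;
        # an epsilon keeps it finite.
        similarity = 1.0 / (diff + 1e-8)
        return base + self.weight * similarity
```

No idea if this is the right way to wire it up — in particular whether summing over parameters() like this keeps everything inside the autograd graph.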
The parameters() function returns a generator that I can iterate over to get a network’s parameters. These wouldn’t be Variables, though, would they?
Should I rather get the linear layers from the model directly and access their parameters via the .weight attribute?
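As a sanity check, this is how I’ve been inspecting what parameters() actually yields, on a small throwaway layer:

```python
import torch.nn as nn

model = nn.Linear(4, 2)
for p in model.parameters():
    # Each p is an nn.Parameter (a Tensor subclass);
    # I'm unsure whether these track gradients the way I need.
    print(type(p), p.requires_grad, tuple(p.shape))
```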
I think my main problem is that I don’t really understand how autograd works and under what conditions it can compute the gradient of a term like this.
Any help would be appreciated!