Let’s assume I have a loss `L` with two components: `L = L1 + L2`.
For each convolution in a CNN, I want to update some filters only with `L1` and others only with `L2`. How can this be achieved without having to backprop multiple times, which takes forever? Is there a way to decompose the gradient into the two parts corresponding to each term and select only one?
I think one valid approach would be to use `torch.autograd.grad`, specifying the `inputs` argument for the desired filters, and to calculate the gradients with each loss separately. Alternatively, the `backward` method also accepts an `inputs` argument and could be used in a similar way.
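A minimal sketch of the `torch.autograd.grad` approach, assuming a toy single-conv model and an arbitrary split of the losses and filter groups (the names `conv`, `L1`, `L2`, and the 2/2 channel split are illustrative, not from the thread):

```python
import torch
import torch.nn as nn

# Toy setup: one conv layer whose 4 output filters we split into two groups.
conv = nn.Conv2d(3, 4, kernel_size=3, padding=1)
x = torch.randn(2, 3, 8, 8)
out = conv(x).mean(dim=(0, 2, 3))  # one scalar per output channel

# Two hypothetical loss components sharing the same graph.
L1 = out[:2].sum()
L2 = out[2:].sum()

# Compute each loss's gradient w.r.t. the conv weight separately.
# retain_graph=True keeps the graph alive for the second call.
g1, = torch.autograd.grad(L1, conv.weight, retain_graph=True)
g2, = torch.autograd.grad(L2, conv.weight)

# Assemble the combined gradient: filters 0-1 follow L1, filters 2-3 follow L2.
with torch.no_grad():
    grad = torch.zeros_like(conv.weight)
    grad[:2] = g1[:2]
    grad[2:] = g2[2:]
    conv.weight.grad = grad  # an optimizer.step() would now use this
```

Note this still performs two backward passes through the shared graph, which is the cost the follow-up post complains about.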
Thanks for your reply. I was actually already using the `torch.autograd.grad` approach, but it is very slow. Isn’t there a way of doing this without having to backprop multiple times?