Computing Backward one layer at a time

Hi,
I have a neural network and want to calculate the backward pass one layer at a time. Let’s say this is the network:

import torch.nn as nn

# Example sizes; use whatever fits your problem.
D_in, H, D_out, num_hidden = 10, 20, 5, 3

class DeepNN(nn.Module):
    def __init__(self):
        super(DeepNN, self).__init__()
        self.fc_1 = nn.Linear(D_in, H)      # input layer
        self.fc_end = nn.Linear(H, D_out)   # output layer

        self.module_list = [self.fc_1]
        for i in range(num_hidden):
            self.module_list.append(nn.Linear(H, H))  # hidden layers
        self.module_list.append(self.fc_end)

        self.layers = nn.ModuleList(self.module_list)
        self.f = nn.Sequential(*self.module_list)
        self._name = "DeepNN"

    def forward(self, x):
        return self.f(x)

Now, instead of calling loss.backward(), I want to compute the gradients layer by layer, modify each layer's gradient, and pass the modified gradient on to the previous layer.
Is there a proper way to do that?

Hi,

I see two ways:

  • Return the output of each layer, then use autograd.grad to backward only the part of the network that you want (see the first sketch below).
  • Use register_hook() on the output of each layer and have the hook function do the computation you want and return the new gradient value from the hook (see the second sketch below).
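
For the first option, a minimal sketch could look like this. The layer sizes, the dummy loss and the clamping step are just placeholders for whatever gradient modification you have in mind:

import torch
import torch.nn as nn

D_in, H, D_out = 8, 16, 4  # example sizes
layers = nn.ModuleList([nn.Linear(D_in, H), nn.Linear(H, H), nn.Linear(H, D_out)])

x = torch.randn(2, D_in, requires_grad=True)

# Forward pass, keeping every intermediate output.
outs = [x]
for layer in layers:
    outs.append(layer(outs[-1]))
loss = outs[-1].sum()  # dummy loss

# Gradient of the loss w.r.t. the last layer's output.
grad_out = torch.autograd.grad(loss, outs[-1], retain_graph=True)[0]

# Walk the layers backwards, one at a time.
for i in reversed(range(len(layers))):
    layer, inp, out = layers[i], outs[i], outs[i + 1]
    # Backward only through this layer: gradients w.r.t. its input and its parameters.
    grads = torch.autograd.grad(
        out,
        [inp] + list(layer.parameters()),
        grad_outputs=grad_out,
        retain_graph=True,
    )
    grad_in, param_grads = grads[0], grads[1:]
    # Store the parameter gradients where an optimizer expects them.
    for p, g in zip(layer.parameters(), param_grads):
        p.grad = g
    # Change the gradient here before it is passed to the previous layer
    # (clamping is just an arbitrary example of a modification).
    grad_out = grad_in.clamp(-1.0, 1.0)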
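
And for the second option, something along these lines. The modify_grad function and the scaling by 0.5 are made up; you would put your own computation there:

import torch
import torch.nn as nn

D_in, H, D_out = 8, 16, 4  # example sizes
layers = nn.ModuleList([nn.Linear(D_in, H), nn.Linear(H, H), nn.Linear(H, D_out)])

def modify_grad(grad):
    # Called during backward with the gradient w.r.t. the tensor the hook was
    # registered on; the returned tensor replaces that gradient.
    return grad * 0.5

x = torch.randn(2, D_in)
out = x
for layer in layers:
    out = layer(out)
    # The hook fires once the gradient for this layer's output has been computed.
    out.register_hook(modify_grad)

loss = out.sum()  # dummy loss
loss.backward()   # the hooks run layer by layer as the gradient flows back

# The parameter gradients now already reflect the modifications made by the hooks.
print(layers[0].weight.grad)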

Hi @albanD, thank you for your explanation.

I have read your explanation and the Autograd documentation page, but I'm sorry, I still don't understand how to do it. Would you mind giving a small example of how to do it for both methods you explained? Thank you very much for your help.

Warm regards,
Reza Qorib