Gradient with respect to the input

Hi there,

Apologies if this is a basic question. I did a bit of searching but didn’t find quite what I had in mind.

Given a model (the forward function) and a loss, is it possible to find the gradient with respect to the input layer?

For example, take AllenNLP’s BiDAF model. It has a forward function which defines the network structure given the inputs, and at some point it also defines the loss.

The question is: for a fixed input, how can I get the gradients at the input layer?

Do you want the gradient for the first layer’s parameters (the input layer) or for the input tensor itself?
For the former you can simply print the layer’s parameters’ .grad attribute, while for the latter you need to create the input tensor with requires_grad=True:

import torch
import torch.nn as nn
import torch.nn.functional as F


class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 10)
        self.fc2 = nn.Linear(10, 2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x


model = MyModel()
# requires_grad=True makes autograd track the input tensor,
# so x.grad is populated after the backward pass
x = torch.randn(1, 20, requires_grad=True)
output = model(x)
output.mean().backward()

# gradient of the first layer's weight parameters
print(model.fc1.weight.grad)
# gradient with respect to the input tensor
print(x.grad)
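
As an alternative, you can also compute the input gradient directly with torch.autograd.grad, which returns the gradient without populating any .grad attributes. A minimal sketch, reusing the MyModel class defined above:

# minimal sketch: compute the gradient of the loss w.r.t. the input
# using torch.autograd.grad instead of calling backward()
model = MyModel()
x = torch.randn(1, 20, requires_grad=True)
loss = model(x).mean()

# returns a tuple with one gradient tensor per input passed in
input_grad, = torch.autograd.grad(loss, x)
print(input_grad)  # same shape as x, i.e. [1, 20]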