Update network after differentiation with autograd.grad()

Hi,

I want to explicitly learn the gradients of a neural network's output w.r.t. its input.
What I actually coded is a graph neural network, but for simplicity I am showing a plain feed-forward network here. This FFNN is not updating either, and I guess it is due to the autograd.grad() call in the forward pass.

import torch

class Feedforward(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Feedforward, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(self.hidden_size, 1)

    def forward(self, x):
        hidden = self.fc1(x)
        relu = self.relu(hidden)
        output = self.fc2(relu)
        output = output.sum()
        # differentiate the scalar output w.r.t. the input;
        # create_graph=True keeps this step differentiable so the loss
        # can later backpropagate into the network's parameters
        grad_x = torch.autograd.grad(outputs=output, inputs=x,
                                     retain_graph=True, create_graph=True)
        return grad_x[0]

test_input = torch.rand((10, 3), requires_grad=True)
test_output = torch.rand((10, 3))

model = Feedforward(3, 10)
optim = torch.optim.Adam(model.parameters())
optim.zero_grad()
loss_fn = torch.nn.L1Loss()
model.train()

out = model(test_input)
loss = loss_fn(out, test_output)
loss.backward()
optim.step()  # if you break here and inspect the gradients of the
              # FFNN's parameters, they seem to be 0
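
This is the kind of check I mean: a quick sanity sketch (the parameter names are just the ones from the model above) that prints every parameter's gradient right after loss.backward():

for name, param in model.named_parameters():
    # param.grad stays None if the parameter never took part in the backward pass
    print(name, None if param.grad is None else param.grad.norm().item())
# note: fc2.bias may show no gradient here, which I think is expected,
# since d(output)/d(input) does not depend on that bias term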

How do the parameters of this network get updated when autograd.grad() is used inside forward()?

Update: The layers do receive gradients after all. My problem must have something to do with my GNN.

Maybe the integration with PyTorch Geometric is breaking the computation graph somewhere.
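
To isolate this, my plan is to test the same pattern on a minimal PyTorch Geometric model. This is only a sketch of that test (GCNConv and the toy edge_index just stand in for my actual GNN, and I am not sure every PyG op supports the double backward that create_graph=True needs):

import torch
from torch_geometric.nn import GCNConv

class GradGNN(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, 1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        out = self.conv2(h, edge_index).sum()
        # same trick as in the FFNN: keep the grad computation differentiable
        return torch.autograd.grad(out, x, create_graph=True)[0]

# toy graph: 10 nodes with a few arbitrary edges
x = torch.rand((10, 3), requires_grad=True)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
target = torch.rand((10, 3))

gnn = GradGNN(3, 16)
optim = torch.optim.Adam(gnn.parameters())
optim.zero_grad()
loss = torch.nn.L1Loss()(gnn(x, edge_index), target)
loss.backward()
for name, p in gnn.named_parameters():
    print(name, None if p.grad is None else p.grad.norm().item())

If the parameter gradients come out non-None here, the problem is probably in how I wire things up in my own GNN rather than in PyTorch Geometric itself.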