How to correctly backpropagate manual gradients

Hi all,
For my use case, I need to manually calculate the gradients w.r.t. the parameters of my network and do backpropagation.

This is how I’m currently calculating the loss and doing backpropagation on the parameters:

network_optimizer.zero_grad()
network_loss = criterion(data)
gradients = network.get_gradient()  # manual gradient calculation

for param in network.parameters():
    param.grad = gradients
    param.backward(gradients)

network_optimizer.step()

Assuming my gradient calculation is correct and each gradient tensor has the same shape as its parameter tensor, does the above correctly assign the gradients to the parameters and do backpropagation w.r.t. the parameters?

Any insight would be greatly appreciated!

Hi @Aaron_Thomas,

If you’re manually calculating gradients, why don’t you define your own gradients within a torch.autograd.Function object? Then you don’t have to manually place the gradients on the .grad of your network’s parameters; autograd will do that for you.
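
For example, something roughly like this (a minimal sketch; ManualLoss, the squared-error forward/backward maths and the target argument are just placeholders for your own calculation):

import torch

# Minimal sketch of a custom Function: backward() returns whatever gradient
# you compute yourself, and autograd routes it to the right tensors.
class ManualLoss(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, target):
        ctx.save_for_backward(input, target)
        return ((input - target) ** 2).mean()  # placeholder forward maths

    @staticmethod
    def backward(ctx, grad_output):
        input, target = ctx.saved_tensors
        # Gradient of the forward output w.r.t. input (None for target,
        # which needs no gradient) -- replace with your own calculation.
        grad_input = grad_output * 2.0 * (input - target) / input.numel()
        return grad_input, None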

Hi @AlphaBetaGamma96, thank you for your response. I’m relatively new to PyTorch; how could I write the autograd Function to handle the gradients in this use case?

Hi @Aaron_Thomas,

There’s an example tutorial in the docs here: PyTorch: Defining New autograd Functions — PyTorch Tutorials 2.3.0+cu121 documentation
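
Applied to the snippet in your first post, the training loop would then look roughly like this (ManualLoss is the sketch above, and target is a stand-in for whatever your criterion compares against):

network_optimizer.zero_grad()
output = network(data)                   # forward pass through your network
loss = ManualLoss.apply(output, target)  # call the custom Function via .apply
loss.backward()                          # runs your backward() and fills param.grad
network_optimizer.step()

That way autograd populates the per-parameter .grad tensors for you, and the manual loop over network.parameters() is no longer needed.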
