How do you calculate the gradient of an earlier layer when the gradient of a later layer is given?

If you already have the gradient with respect to a layer's output, you can pass it into `.backward()` via the `gradient` argument…

Example here

.backward in the docs

Note you can also pass `inputs` into backward. I haven't used it myself, but it doesn't run a forward pass; it restricts gradient accumulation, so `.grad` is populated only for the tensors you list instead of for every leaf in the graph.
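A small sketch of what `inputs=` does (variable names here are just for illustration): with two leaves contributing to the output, only the listed one gets its `.grad` filled in.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)
z = (x * y).sum()

# Accumulate gradients only into x.grad; y.grad stays None.
z.backward(inputs=[x])
print(x.grad)  # dz/dx, which equals y here
print(y.grad)  # None
```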

something like… (the code below is just an illustration)

net = nn.Sequential(layer1, layer2)      # Sequential takes modules, not a list
out = net(input_tensor)                  # forward pass builds the autograd graph
gradient_tensor = torch.ones_like(out)   # the given gradient w.r.t. the output
out.backward(gradient=gradient_tensor)   # backprop starting from that gradient
print(layer1.weight.grad)                # earlier layer's parameter gradients
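To make the idea above concrete, here is a runnable sketch (the names `layer1`, `layer2`, `upstream` are assumptions for this example): we keep the gradient at the earlier layer's output with `retain_grad()`, feed a given upstream gradient into `backward`, and verify the result against the chain rule by hand.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer1 = nn.Linear(4, 4)
layer2 = nn.Linear(4, 4)

x = torch.randn(1, 4)
h = layer1(x)        # output of the earlier layer
h.retain_grad()      # non-leaf tensors need this to keep .grad
out = layer2(h)

# The "given" gradient at the later layer's output.
upstream = torch.randn_like(out)
out.backward(gradient=upstream)

# Chain rule by hand: for out = h @ W.T + b, grad_h = upstream @ W.
expected = upstream @ layer2.weight
print(torch.allclose(h.grad, expected))  # True
```

If you only want that one intermediate gradient and nothing else, `torch.autograd.grad(out, h, grad_outputs=upstream)` returns it directly without populating `.grad` on the parameters.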