I have a simple network and the gradient of the final FC layer

How can I calculate the gradients of the rest of the network?

For example, I have a simple network like:

conv→conv→relu→linear

I only have the gradient of the linear layer.

I don’t have access to the loss, or to anything like the smashed data in split learning.

How can I calculate the gradients of the two conv layers, and where should I call backward()?

Assuming you have the output gradient of your linear layer, something like this would work:

```
out = model(input)  # forward pass; `out` is the linear layer's output
out.backward(gradient=output_gradient)  # backprop the known output gradient through the whole network
```
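To make this concrete, here is a minimal runnable sketch. The architecture and all shapes are assumptions chosen to match the conv→conv→relu→linear description; `output_gradient` is a random stand-in for the gradient you actually receive. After `backward`, the conv layers' parameter gradients are populated in their `.grad` attributes:

```python
import torch
import torch.nn as nn

# Hypothetical model matching the description; all sizes are assumptions.
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3),   # first conv: (2, 1, 8, 8) -> (2, 4, 6, 6)
    nn.Conv2d(4, 8, kernel_size=3),   # second conv: -> (2, 8, 4, 4)
    nn.ReLU(),
    nn.Flatten(),                     # -> (2, 128)
    nn.Linear(8 * 4 * 4, 10),         # final FC layer: -> (2, 10)
)

x = torch.randn(2, 1, 8, 8)           # dummy input batch
out = model(x)                        # linear output, shape (2, 10)

# Stand-in for the gradient you received (e.g. from the other party in split learning).
output_gradient = torch.randn_like(out)

# Backprop the given output gradient through the entire network.
out.backward(gradient=output_gradient)

# The conv layers' weight gradients now exist.
print(model[0].weight.grad.shape)     # torch.Size([4, 1, 3, 3])
print(model[1].weight.grad.shape)     # torch.Size([8, 4, 3, 3])
```

The key point is that `backward(gradient=...)` computes a vector–Jacobian product: autograd multiplies the gradient you supply through each layer's Jacobian, so the whole chain back to the conv weights falls out automatically.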

So does that mean I still need both a layer's output tensor and its gradient in order to call backward()?