How to calculate the gradient of the upper part of a network

For example, I have a simple network like:
conv → conv → relu → linear
I only have the gradient of the linear layer; I don't have access to the loss, or to anything like the smashed data in split learning. How can I calculate the gradients of the two conv layers, and where should I call backward()?
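
For concreteness, here is a minimal sketch of what I mean, assuming that "the gradient of the linear layer" is the gradient with respect to the linear layer's output (the layer sizes and the `received_grad` tensor below are made-up placeholders). My guess is that I should seed backpropagation with that gradient instead of calling backward() on a loss:

```python
import torch
import torch.nn as nn

# Placeholder network with the conv→conv→relu→linear topology from the question.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.Conv2d(8, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)

x = torch.randn(4, 3, 32, 32)   # dummy input batch
output = model(x)               # local forward pass builds the autograd graph

# Stand-in for the gradient I actually receive (in practice it would come
# from whoever computes the loss), shaped like the linear layer's output.
received_grad = torch.randn_like(output)

# Instead of loss.backward(), seed backprop with the received gradient:
output.backward(gradient=received_grad)

# Is this the right place to call backward? After this, the conv layers'
# parameter gradients do appear to be populated:
print(model[0].weight.grad.shape)
```

(As far as I understand, `output.backward(gradient=g)` is equivalent to `torch.autograd.backward(output, grad_tensors=g)`.)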