Training a model with a pre-trained network

I made a model to do Super Resolution (SR) over the MNIST dataset, and I've already trained a classifier that checks whether it can recognize the digit from the output of my SR model.
I use the result of this classifier to compute the loss of my SR model, and my question is this:
During the training loop

SRmodel.train()
classifier.eval()  # eval mode; note this does not freeze the classifier's weights
output = SRmodel(data)
predictedLabel = classifier(output)
loss = MyLoss(data, output, predictedLabel, label)
loss.backward()

How do I know whether backpropagation occurs only over my SR model, only over my classifier, or over both? (So far the outputs of my SR model are correct, but I might be missing something.)

You could check the gradients in both models separately before and after the backward call via:

for param in SRmodel.parameters():
    print(param.grad)
for param in classifier.parameters():
    print(param.grad)

These should be None before the first backward call, zeros after zeroing out the gradients via optimizer.zero_grad(), and contain valid values after the backward call.
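To make this concrete, here is a minimal, self-contained sketch. The two `nn.Linear` layers are hypothetical stand-ins for the SR model and the classifier (your real architectures will differ); the point it demonstrates is that `eval()` changes layer behavior (dropout, batch norm) but does not stop gradients from flowing into the classifier's parameters:

```python
import torch
import torch.nn as nn

# Toy stand-ins for SRmodel and the classifier (hypothetical shapes).
sr_model = nn.Linear(4, 4)    # plays the role of SRmodel
classifier = nn.Linear(4, 2)  # plays the role of the pre-trained classifier

sr_model.train()
classifier.eval()  # affects dropout/batchnorm only, not gradient computation

data = torch.randn(8, 4)
output = sr_model(data)
logits = classifier(output)
loss = logits.sum()  # placeholder for MyLoss
loss.backward()

# Both models receive gradients: eval() alone does not block backprop.
print(all(p.grad is not None for p in sr_model.parameters()))    # True
print(all(p.grad is not None for p in classifier.parameters()))  # True
```

If you want the classifier truly frozen, either call `p.requires_grad_(False)` on its parameters before the forward pass, or simply leave its parameters out of the optimizer; gradients that are computed but never stepped on leave the weights unchanged.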

Thank you for your answer. Yes, the gradients contain valid values after the backward call.