Gradient backpropagation in feature residual learning

I have a pretrained network and two images. Two separate features, named F_l and F_h, are extracted from a middle layer of the pretrained network. I then use two convolution layers to learn the residual between them, but the residual learning does not seem to be working. Is something wrong with the gradient backpropagation?

[images: screenshots of the training code were attached here]

The call to `hr_feature = torch.tensor(hr_feature)` would prevent gradients from flowing through, since it copies the data into a new tensor with no autograd history. Is that intended?
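For illustration, here is a minimal sketch of the pitfall (the tensor names follow the screenshots; the shapes are made up):

```python
import torch

# Stand-in for a feature map produced by upstream layers: it is part of
# the autograd graph, so it carries a grad_fn.
hr_feature = torch.randn(1, 8, 4, 4, requires_grad=True) * 2.0
assert hr_feature.grad_fn is not None  # connected to the graph

# Rewrapping with torch.tensor() copies the values but DROPS the graph:
# the result is a fresh leaf with requires_grad=False, so no gradient
# can flow back through it to the layers that produced hr_feature.
copied = torch.tensor(hr_feature)
print(copied.requires_grad, copied.grad_fn)  # False None

# Using the tensor directly keeps the graph intact and gradients flow:
loss = hr_feature.sum()
loss.backward()
```

In short, any residual-learning loss computed on the rewrapped copy would never update the convolution layers that produced the feature, which matches the symptom described above.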

It is not intended; it was a suggestion from ChatGPT.