Resize tensor with PIL and preserve gradients

Is there any way to convert a tensor that has gradients into a PIL image, resize the image, and then convert it back to a tensor without losing the gradients?

I don’t think this exact workflow is possible, as the transformation into a PIL Image would break the computation graph and remove the gradients. PIL Images use numpy arrays internally, which do not carry gradients.
You could of course reshape the tensor directly, if that fits your use case, but I think that would also break the gradients.
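You can see the detach requirement directly: PyTorch refuses to convert a tensor that requires grad into a numpy array, so any PIL round trip would have to go through detach() first, which cuts the result off from the graph:

import torch

x = torch.randn(3, 24, 24, requires_grad=True)
# x.numpy()  # RuntimeError: Can't call numpy() on Tensor that requires grad
arr = x.detach().numpy()  # works, but arr no longer tracks gradients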
What is your exact use case?

First, thanks for the fast response.
My use case is:
I’m training a Variational Autoencoder that takes images of size 64, 128, 256, etc.

But instead of using a pixel-wise loss I’m using a perceptual loss: I take the original image and the reconstructed image, feed them to a pre-trained model (SSD), and compute the loss on its hidden-layer activations.

The problem is that SSD only accepts 300x300 images, while I train the VAE with 128x128, so in order to compute the loss via SSD I need to resize the reconstructed image (which carries the gradients).
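If both images were already 300x300, the loss would look roughly like this (the small feature extractor below is just a runnable stand-in for the SSD hidden layers; my actual reconstructions are 128x128, which is the problem):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for the SSD trunk whose hidden activations I compare
feature_extractor = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

original = torch.rand(2, 3, 300, 300)  # detector-sized input
reconstructed = torch.rand(2, 3, 300, 300, requires_grad=True)

# Perceptual loss: distance between hidden activations, not raw pixels
loss = F.mse_loss(feature_extractor(reconstructed), feature_extractor(original))
loss.backward()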

Would it work to use the original images, feed them to the VAE, resize the output, and feed it to the SSD?
It would be similar to this small example, which preserves the gradients in the resized image, since the interpolation is differentiable:

import torch
import torch.nn.functional as F

# 4D input (batch, channels, height, width) so interpolate resizes spatially
x = torch.randn(1, 3, 24, 24, requires_grad=True)
x_res = F.interpolate(x, size=30)  # resizes to (1, 3, 30, 30)

loss = x_res.mean()
loss.backward()
print(x.grad)  # gradients flow back through the interpolation
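For the SSD use case the whole pipeline could then look like this sketch (vae and ssd_trunk are stand-ins for your actual models):

import torch
import torch.nn as nn
import torch.nn.functional as F

vae = nn.Conv2d(3, 3, 3, padding=1)        # stand-in for the trained VAE
ssd_trunk = nn.Conv2d(3, 8, 3, padding=1)  # stand-in for the SSD hidden layers

imgs = torch.rand(2, 3, 128, 128)
recon_300 = F.interpolate(vae(imgs), size=(300, 300), mode='bilinear', align_corners=False)
imgs_300 = F.interpolate(imgs, size=(300, 300), mode='bilinear', align_corners=False)

loss = F.mse_loss(ssd_trunk(recon_300), ssd_trunk(imgs_300))
loss.backward()  # gradients reach the VAE parameters through the resize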