Backpropagation through predicted padding size values

Hi there,

Suppose I have a network predicting padding sizes, i.e., a tensor of shape (batch_size, 4). The number 4 comes from the top, bottom, left, and right padding sizes.

These predicted padding sizes are then used to pad an image, i.e., a tensor of shape (batch_size, c, h, w).
The padding value is a constant zero. After padding, we have a tensor of shape (batch_size, c, h+padding_size_h, w+padding_size_w). For clarification, padding_size_h is the sum of the top and bottom values, and padding_size_w is the sum of the left and right values.

I then use the padded image to compute a pixel-wise loss with respect to a target image of the same shape as the padded image.

Does either nn.ConstantPad2d or F.pad, which take ints or tuples for the padding sizes, break backpropagation? That is, using F.pad(image, padding_size_predictions.long(), 'constant', 0), will gradients still backpropagate?

Thanks!

Hi Lolo!

If I understand your use case correctly, you would like to train a network
to predict the padding sizes.

This won’t work because the padding sizes are inherently integers – they
are discrete variables that you can’t do calculus on. So you can’t use
backpropagation and gradient-descent-based algorithms to optimize them.
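
Here is a minimal sketch of where the graph breaks (preds is a hypothetical stand-in for your network's output): the moment you cast the float predictions to an integer dtype, the result is detached from the autograd graph, so no gradient can reach the network.

```python
import torch

# Hypothetical float predictions from a network (an assumed name, not your code).
preds = torch.rand(1, 4, requires_grad=True) * 10

# Casting to an integer dtype cuts the autograd graph: integer tensors
# cannot carry gradients in pytorch.
pad_sizes = preds.long()
print(pad_sizes.requires_grad)  # False
print(pad_sizes.grad_fn)        # None
```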

To answer your specific question, F.pad() does not support backpropagation
through its padding size (pad) argument. (It will backpropagate through its
input argument, that is, through the tensor being padded).
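
To make that concrete, here is a minimal sketch (toy shapes, not your code): the pad argument must be a plain tuple or list of Python ints, not a tensor of predictions, and gradients flow only to the input tensor.

```python
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 8, 8, requires_grad=True)

# pad is (left, right, top, bottom) for the last two dimensions; it must
# be plain Python ints, so nothing differentiable passes through it.
padded = F.pad(image, (1, 2, 3, 4), mode='constant', value=0)
print(padded.shape)  # torch.Size([1, 3, 15, 11])

loss = padded.sum()
loss.backward()
print(image.grad.shape)  # torch.Size([1, 3, 8, 8]) -- gradients reach the input
```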

Best.

K. Frank

Hi Frank,

Thanks for answering, very helpful. That's what I assumed, but I wanted to double-check in case there was some sort of differentiable operation under the hood.

If anyone comes up with a workaround to this problem, please share! It would be much appreciated!

Thanks!