Slicing input Variables and backpropagation

Hello all,
I have a pretrained network with a 28x28 (MNIST) input image and 10 outputs. I want to get the gradient of one of those outputs with respect to the input (i.e., how the image should change to increase the score of one digit).

If I input a single image wrapped in a Variable with requires_grad=True, I can do output[digit].backward(retain_variables=True) to get this. So slicing the output of a network does not affect backpropagation.

However, if I input a batch of images, slice the input with x = input[k, :, :, :], and call output[k, digit].backward(retain_variables=True), x.grad remains empty (it is still None). Could you explain why this happens, and is there a way to do this without getting the gradient for the whole input?
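
Here is a minimal sketch of what I mean (the network below is just a hypothetical stand-in for my pretrained model, using the old Variable API):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

# Stand-in for the pretrained MNIST model (an assumption, not my actual net)
net = nn.Sequential(nn.Linear(28 * 28, 10))
digit = 3

# Single image wrapped directly in a Variable: x is a leaf node,
# so x.grad is populated after backward()
x = Variable(torch.randn(1, 1, 28, 28), requires_grad=True)
out = net(x.view(1, -1))
out[0, digit].backward()
print(x.grad is None)        # False -- gradient is available

# Batch input, then a slice: the slice comes out of an indexing op,
# so it is an intermediate node and its .grad is never filled in
batch = Variable(torch.randn(5, 1, 28, 28), requires_grad=True)
k = 0
x = batch[k:k + 1]           # not a leaf
out = net(x.view(1, -1))
out[0, digit].backward()
print(x.grad is None)        # True  -- only the leaf `batch` gets a .grad
print(batch.grad is None)    # False -- gradient for the whole batch
```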

Thank you in advance!

I would slice the input and then wrap it in a Variable, because autograd will only return gradients to the user for leaf nodes.
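
For example (a sketch under the same assumptions as above, with a stand-in model and data):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

# Stand-in model and data (assumptions, not the original code)
net = nn.Sequential(nn.Linear(28 * 28, 10))
batch = torch.randn(5, 1, 28, 28)
k, digit = 0, 3

# Slice first, then wrap the slice in its own Variable: it is now a leaf
# node, so autograd returns its gradient to the user
x = Variable(batch[k:k + 1], requires_grad=True)
out = net(x.view(1, -1))
out[0, digit].backward()
print(x.grad.size())         # torch.Size([1, 1, 28, 28])
```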

Yes, but in this case I have to pass the new Variable through the network once more, right?

I think I had the same confusion as the OP. Basically, I thought the slicing operation didn’t compute gradients, but in fact x.grad (in the OP’s example) was None because intermediate variables don’t store gradients, not because slicing doesn’t compute gradients.
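
For example, attaching a hook to the sliced Variable shows that a gradient is indeed computed for it, even though its .grad field is never populated (again just a sketch with a stand-in model):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

# Stand-in model (an assumption); the hook is only there to show that a
# gradient is computed for the sliced, intermediate Variable
net = nn.Sequential(nn.Linear(28 * 28, 10))
batch = Variable(torch.randn(5, 1, 28, 28), requires_grad=True)

x = batch[0:1]                               # intermediate node, not a leaf
x.register_hook(lambda g: print(g.size()))   # fires with the slice's gradient
net(x.view(1, -1))[0, 3].backward()
print(x.grad is None)                        # True: computed, but not stored
```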

Hi, is this problem solved in v0.2?