Best way to backpropagate with respect to the input variable

Hello everyone, I come to you in a time of great need,

I need only the input to my model, not the weights, to receive gradients during backpropagation.

It is an experiment. I know it might sound silly, but I was wondering how to achieve this.

I tried setting x.requires_grad = True on my input variable and then requires_grad = False on the parameters of all the layers, but it seems that the computational graph isn’t created when I do this.

Should I leave requires_grad set to True for everything and then manually zero the gradients of all the layers?

How can I do that? Or is there a better way?

Thank you very much in advance for your help and patience. I’m a beginner.

Hi,

If you only want the gradient for the input, the simplest thing I can think of is:

import torch
from torch import autograd

model = ...   # Your model
crit = ...    # Your loss criterion
input = ...   # Your input tensor
label = ...   # Your target

# Make sure the input requires grad
input.requires_grad_(True)

# Do the forward as usual
output = model(input)
loss = crit(output, label)

# Ask only for the grad wrt the input:
grad_input, = autograd.grad(loss, input)
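
In case it’s useful: the alternative you mentioned (keeping requires_grad True and not collecting gradients for the layers) also works if you freeze the parameters rather than zeroing their gradients afterwards. A minimal sketch, reusing the same placeholder model, crit and label as above:

import torch

model = ...   # Your model
crit = ...    # Your loss criterion
input = ...   # Your input tensor
label = ...   # Your target

# Freeze all the parameters so autograd never computes their grads
for param in model.parameters():
    param.requires_grad_(False)

input.requires_grad_(True)

output = model(input)
loss = crit(output, label)

# A regular backward pass now only populates input.grad
loss.backward()
grad_input = input.grad

Freezing the parameters this way is also cheaper than computing their gradients and then zeroing them, since autograd skips those computations entirely.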

This solved it! Thanks 🙂