Propagate custom initial gradient through network?

To my knowledge, the autograd.backward() function is used to determine the gradient of the loss with respect to the output of the network, which ultimately gets propagated back through the network via chain rule.

Is it possible to manually set the initial gradient (gradient of loss w.r.t. output), and use the backward() function to propagate this artificial gradient back through the network to the inputs?

If so, how might I go about doing this?

This post asked a similar question, and the answer claims that there is a way to pass a gradient argument to the backward() function. However, I do not see this in the documentation.
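For what it's worth, backward() does accept exactly such an argument (named gradient): it is the vector v in the vector-Jacobian product that seeds backpropagation. A minimal sketch in current PyTorch, with a simple elementwise graph:

```python
import torch

# A leaf tensor we want gradients for.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2  # non-scalar output, so backward() needs a seed gradient

# Custom "initial gradient" (dLoss/dOutput), same shape as y.
v = torch.tensor([0.1, 1.0, 10.0])
y.backward(gradient=v)  # propagate v back through the graph

print(x.grad)  # each entry is 2 * v[i] -> tensor([ 0.2000,  2.0000, 20.0000])
```

Since y = 2x, the Jacobian is 2I, and the gradient that lands on x is simply 2*v.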


you can do:

model.backward(custom_grad)
@smth Thanks for the quick reply!

I am encountering an issue when I attempt to run the line that you suggested. I assume that when you refer to model, you mean a 'Net' object. I am trying to run the following code segment, in which I attempt to:

  1. Load a pre-trained MNIST model
  2. Execute a forward pass with a single image (batch_size=1) to get activations
  3. Create a dummy gradient (custom_grad = 0.5)
  4. Backpropagate the dummy gradient through the network
  5. Access the artificial gradient w.r.t. the input
# Load model for testing (torch.load returns the full saved model)
model = torch.load('./mnist_saved_model.pth')
SoftmaxWithXent = nn.CrossEntropyLoss()

# Construct the testing dataset
test_dataset = MNIST_Dataset(mnist_test_data, mnist_test_labels)

for img, lbl in DataLoader(test_dataset, batch_size=1, shuffle=True):
    # Create the data and label variables so we can use them in the computation
    img = Variable(torch.FloatTensor(img), requires_grad=True)
    lbl = Variable(torch.LongTensor(lbl))
    # Normalize pixel values from [0,255] to [0,1]
    img = torch.div(img, 255.0)
    # Call a forward pass on the data
    output = model(img)
    custom_grad = torch.FloatTensor(np.asarray([0.5]))
    model.backward(custom_grad)
    print("Gradient w.r.t. input:", img.grad)

When I run this, I get the following error:
AttributeError: 'Net' object has no attribute 'backward'

As a side note: when I compute the gradient w.r.t. a loss function (as usual) and attempt to extract the gradient w.r.t. the input image, I either get None or gradient values that are nearly zero (on the order of 10^-30). Why might this be?
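One likely explanation for the None (a guess, but it matches the code above): the line img = torch.div(img, 255.0) rebinds img to a non-leaf tensor, and PyTorch only populates .grad on leaf tensors. A small demonstration:

```python
import torch

# .grad is populated only for leaf tensors (created directly with
# requires_grad=True), not for intermediate results of operations.
img = torch.rand(1, 28 * 28, requires_grad=True)  # leaf tensor
leaf = img                # keep a handle on the original leaf
img = img / 255.0         # rebinding: `img` is now a NON-leaf tensor

out = img.sum()
out.backward()

print(img.grad)           # None -- non-leaf, gradient not retained
print(leaf.grad is None)  # False -- the leaf received the gradient
```

The fix is to normalize before wrapping the data (or keep a reference to the leaf, as above, and read .grad from it).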

I am new to PyTorch so please excuse my ineptitude. Thanks again!

I apologize, it should be output.backward(custom_grad); backward() is called on the output tensor, not on the model.

What you want to do is:

output = model(img)
output.backward(custom_grad)
print(img.grad)

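Putting the thread's pieces together, here is a self-contained sketch of the whole recipe. The MNIST Net is replaced by a small stand-in model (an assumption, purely for illustration); the key points are that backward() is called on the output tensor with the custom gradient, and that .grad is read from the leaf input:

```python
import torch
import torch.nn as nn

# Stand-in for the thread's MNIST `Net` (hypothetical, for illustration).
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 2))

# Normalize BEFORE enabling gradients, so `img` stays the leaf we read .grad from.
img = torch.rand(1, 4) / 255.0
img.requires_grad_(True)

output = model(img)  # forward pass, shape (1, 2)

# Custom "initial gradient" (dLoss/dOutput), same shape as `output`.
custom_grad = torch.full_like(output, 0.5)

# Propagate the artificial gradient back to the input.
output.backward(custom_grad)

print(img.grad.shape)  # torch.Size([1, 4])
```

Note that the custom gradient must match the output's shape; a scalar like 0.5 has to be expanded (here via torch.full_like) before being passed to backward().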