How to manually set output gradients?

I have a network that predicts a vector given an input. Is it possible to manually set gradients for the output and then call backward in order to propagate these gradients backward through the network?

Essentially, I have:

action_predicted = self.policy_net(inputs_action_pred)

where policy_net is a simple neural network defined as follows:

import torch
import torch.nn as nn
import torch.nn.init as init

class PolicyNet(nn.Module):
    def __init__(self):
        super(PolicyNet, self).__init__()
        self.fc1 = nn.Linear(2, 100)
        self.fc2 = nn.Linear(100, 100)
        self.fc3 = nn.Linear(100, 2)
        # in-place normal_ replaces the deprecated init.normal
        init.normal_(self.fc1.weight, mean=0, std=0.5)
        init.normal_(self.fc2.weight, mean=0, std=0.5)
        init.normal_(self.fc3.weight, mean=0, std=0.5)

    def forward(self, x):
        # torch.tanh replaces the deprecated F.tanh
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        x = torch.tanh(self.fc3(x))
        return x

I want to access:

action_predicted.grad

You could call backward on your output Variable with a gradient parameter. See the backward() documentation.
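Something along these lines should work (a minimal sketch with a dummy input and a placeholder gradient; retain_grad() is only needed if you also want to read action_predicted.grad, since it is a non-leaf tensor and would not otherwise keep its gradient):

policy_net = PolicyNet()
inputs_action_pred = torch.randn(1, 2)  # dummy input for illustration

action_predicted = policy_net(inputs_action_pred)
action_predicted.retain_grad()  # keep .grad on this non-leaf tensor

# manually chosen gradient w.r.t. the output (same shape as the output)
manual_grad = torch.tensor([[0.1, -0.2]])

# backpropagate the supplied gradient through the network
action_predicted.backward(gradient=manual_grad)

print(action_predicted.grad)       # the gradient you passed in
print(policy_net.fc1.weight.grad)  # gradients propagated to the parameters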

This would percolate the gradient all the way down the computation graph though. Is there any way to modify a single gradient in-place? In my case, I’m computing a gradient in a remote process and sending it back to the parent. Will the parent’s parameter tensor have a valid .grad attribute? If so, perhaps just a copy_ might work.
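If the question is whether you can write a gradient computed elsewhere directly into a parameter's .grad, a sketch like the one below should work. Here remote_grads and optimizer are placeholders for whatever tensors your remote process sends back and whatever optimizer you are using; note that .grad is None until a backward pass has populated it, so it may need to be created first:

# remote_grads: hypothetical list of tensors received from the remote
# process, one per parameter and with matching shapes
for param, grad_from_worker in zip(policy_net.parameters(), remote_grads):
    if param.grad is None:
        param.grad = grad_from_worker.clone()
    else:
        param.grad.copy_(grad_from_worker)

optimizer.step()  # assuming an optimizer built over policy_net.parameters()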
