I have a network that predicts a vector given an input. Is it possible to manually set gradients for the output and then call backward in order to propagate these gradients backward through the network?
Essentially, I have:
action_predicted = self.policy_net(inputs_action_pred)
where policy_net is a simple neural network, defined as follows:
import torch
import torch.nn as nn
import torch.nn.init as init

class PolicyNet(nn.Module):
    def __init__(self):
        super(PolicyNet, self).__init__()
        self.fc1 = nn.Linear(2, 100)
        self.fc2 = nn.Linear(100, 100)
        self.fc3 = nn.Linear(100, 2)
        # init.normal was removed; the in-place variant is init.normal_
        init.normal_(self.fc1.weight, mean=0, std=0.5)
        init.normal_(self.fc2.weight, mean=0, std=0.5)
        init.normal_(self.fc3.weight, mean=0, std=0.5)

    def forward(self, x):
        # F.tanh is deprecated in favor of torch.tanh
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        x = torch.tanh(self.fc3(x))
        return x
I want to access:
action_predicted.grad
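For reference, this is possible: `Tensor.backward()` accepts a `gradient` argument, which is exactly a manually chosen gradient for that (non-scalar) output, and autograd propagates it back through the network. One caveat: `.grad` is only populated on leaf tensors by default, so reading `action_predicted.grad` requires calling `retain_grad()` on it first. A minimal sketch, using a stand-in `nn.Linear` network and a made-up gradient value in place of the real `policy_net`:

```python
import torch
import torch.nn as nn

# Stand-in for policy_net (hypothetical; any nn.Module works the same way)
net = nn.Sequential(nn.Linear(2, 2))
inputs = torch.randn(1, 2)
out = net(inputs)

# .grad is not kept on non-leaf tensors unless we ask for it
out.retain_grad()

# Manually chosen gradient, same shape as the output
manual_grad = torch.tensor([[0.5, -1.0]])

# Inject the gradient at the output and propagate it backward
out.backward(gradient=manual_grad)

print(out.grad)              # the gradient we injected
print(net[0].weight.grad)    # parameters received gradients too
```

After this, an optimizer step would update the parameters using those gradients, so no explicit loss tensor is needed at all.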