So here I have a very simple model:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(...)
        self.conv2 = nn.Conv2d(...)
        ...
        self.output = nn.Conv2d(...)
        # some more code for ReLU and BN

    def forward(self, x):
        out = self.conv1(x)
        # some more code for ReLU and BN
        out = self.conv2(out)
        # some more code for ReLU and BN
        ...
        out = self.output(out)
        return out
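In case it helps, here is a filled-in toy version of the model I am testing with (all the channel counts and kernel sizes are just placeholders I made up; the real network is bigger):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 8, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(8)
        self.conv2 = nn.Conv2d(8, 8, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(8)
        self.output = nn.Conv2d(8, 1, 3, padding=1)

    def forward(self, x):
        # conv -> BN -> ReLU, twice, then a plain conv as the output layer
        out = torch.relu(self.bn1(self.conv1(x)))
        out = torch.relu(self.bn2(self.conv2(out)))
        out = self.output(out)
        return out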
I know that I can pass in some random data like:
model = Net()
fake_input = torch.randn((1, 1, 32, 32), requires_grad=True)
output = model(fake_input)
and if I want to see the gradient of the input I can use:
shape = torch.ones_like(output)  # gradient w.r.t. output, all ones
output.backward(shape)
# and the gradient for the input data will be
print(fake_input.grad)
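As far as I understand, the tensor passed to backward() is treated as the gradient of some scalar loss with respect to output, so all ones is the same as back-propagating from output.sum(). If I am not mistaken, the functional equivalent would be:

# re-run the forward pass so the autograd graph is fresh
output = model(fake_input)
input_grad, = torch.autograd.grad(output, fake_input,
                                  grad_outputs=torch.ones_like(output))
print(input_grad)  # same values as fake_input.grad above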
So here is what I want to know: is it possible to fix the gradient of the output layer to be some matrix M, back-propagate from that layer, and then study the gradient response of the input?
Something like this:
# change the gradient of the output layer to matrix_M
model.output.weight.grad = matrix_M
# do back-propagation; the network gradient should already be changed to matrix_M here
output.backward(shape)
# and study the gradient response of the input
print(fake_input.grad)
I am not sure if I did this the correct way… can anyone give me some hints?
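One alternative I was wondering about: since the tensor passed to backward() seems to be exactly the gradient flowing into output, maybe I can skip touching output.weight.grad and just pass matrix_M directly (assuming matrix_M has the same shape as output; the randn here is only a stand-in for whatever M I actually want):

# re-run the forward pass so the autograd graph is fresh
output = model(fake_input)
matrix_M = torch.randn_like(output)  # stand-in for the matrix M I want to fix
fake_input.grad = None               # clear the gradient accumulated earlier
output.backward(matrix_M)            # back-propagate with d(loss)/d(output) fixed to matrix_M
print(fake_input.grad)               # gradient response at the input

Would that be equivalent to what I described above, or am I misunderstanding how the backward pass starts?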