How to set the gradient to some value and do back-propagation?

So here I have a very simple model:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(...)
        self.conv2 = nn.Conv2d(...)
        ...
        self.output = nn.Conv2d(...)
        # some more code for ReLU and BN

    def forward(self, x):
        out = self.conv1(x)
        # some more code for ReLU and BN
        out = self.conv2(out)
        # some more code for ReLU and BN
        ...
        out = self.output(out)

        return out

I know that I can pass in some random data like:

model = Net()

fake_input = torch.randn((1, 1, 32, 32), requires_grad=True)
output = model(fake_input)

and if I want to see the gradient of the input, I can use:

shape = torch.ones_like(output)
output.backward(shape)

# and the gradient for the input data will be:
print(fake_input.grad.data)

So here is what I want to know: is it possible to fix the gradient of the output layer to some matrix M, back-propagate from that layer, and then study the gradient response of the input?

something like this:

# change the gradient of the output layer to matrix_M
net.output.weight.grad = matrix_M

# do back-propagation; at this point the gradient of the output layer should already be matrix_M
out.backward(shape)

# and study the gradient response of the input
print(fake_input.grad.data)

I am not sure if I did this the correct way… can anyone give me some hints?

Hi,

Setting the .grad field won’t have any effect on the back-propagation.

If you want to run the backward pass with the gradient of the output set to a given value M, you can simply do: out.backward(M)

I don’t know what you expect to be doing in your example. The gradient of net.output.weight does not influence the gradients in the rest of the network (contrary to the gradient of out).
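
For example, a minimal sketch (assuming the Net definition and the (1, 1, 32, 32) input size from your snippet above; M here is just a random placeholder with the same shape as the output):

import torch

model = Net()
fake_input = torch.randn((1, 1, 32, 32), requires_grad=True)
out = model(fake_input)

# M is the gradient you want to inject at the output, i.e. d(loss)/d(out) = M
M = torch.randn_like(out)
out.backward(M)

# gradient response of the input for that particular output gradient
print(fake_input.grad)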

OK, thanks!

Actually, what I am trying to do is to see the effective receptive field… as mentioned in this paper.

Please correct me if my approach is not right. Thanks again!

I’m not familiar with that paper, but it does look like you want to change the activation (out) and not the weights.
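
A minimal sketch of that idea (assuming the setup above; using a one-hot gradient at the center of the output map is one common way to probe the effective receptive field, not necessarily the paper’s exact procedure):

import torch

model = Net()
model.eval()

fake_input = torch.randn((1, 1, 32, 32), requires_grad=True)
out = model(fake_input)

# gradient signal: 1 at the center output location, 0 everywhere else
grad_at_out = torch.zeros_like(out)
grad_at_out[0, :, out.shape[2] // 2, out.shape[3] // 2] = 1.0

out.backward(grad_at_out)

# the effective receptive field shows up as the region of the input
# where the gradient magnitude is non-zero / large
erf = fake_input.grad.detach().abs()[0, 0]
print(erf)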