Why are the input and output gradients obtained via register_backward_hook the same?

def hook(module, grad_input, grad_output):
    # register_backward_hook passes gradients, not activations:
    # grad_input and grad_output are tuples of tensors.
    grad_input = grad_input[0]
    grad_output = grad_output[0]

    print(module)
    print(id(grad_input))
    print(grad_input.shape)
    print(id(grad_output))
    print(grad_output.shape)

net.module.features[19].register_backward_hook(hook)

In one iteration, the hook prints the following:

Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
140513351157512
torch.Size([1, 512, 32, 32])
140513351157512
torch.Size([1, 512, 32, 32])

It looks as though the reported gradient of this Conv2d layer's input is actually the gradient of its output: since the layer is Conv2d(256, 512, ...), grad_input should have shape torch.Size([1, 256, 32, 32]), yet the hook reports torch.Size([1, 512, 32, 32]) with the exact same id as grad_output.

Something must have gone wrong here.

Can anyone tell me how to get the real gradient of the input tensor?
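
For comparison, here is a minimal self-contained sketch of the workaround I am considering: registering a hook on the tensor itself, which should receive the gradient with respect to that exact tensor. The grads dict and save_grad helper are just illustrative names I made up, and the standalone Conv2d stands in for net.module.features[19]:

import torch
import torch.nn as nn

grads = {}

def save_grad(name):
    # Returns a tensor hook that stores the incoming gradient under `name`.
    def hook(g):
        grads[name] = g
    return hook

def forward_hook(module, inputs, output):
    # Tensor hooks receive the gradient w.r.t. this exact tensor, so they
    # avoid the grad_input ambiguity of module-level register_backward_hook.
    inputs[0].register_hook(save_grad("input"))
    output.register_hook(save_grad("output"))

conv = nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1)
conv.register_forward_hook(forward_hook)

x = torch.randn(1, 256, 32, 32, requires_grad=True)
conv(x).sum().backward()

print(grads["input"].shape)   # expected: torch.Size([1, 256, 32, 32])
print(grads["output"].shape)  # expected: torch.Size([1, 512, 32, 32])

Is this the intended approach? I also noticed that newer PyTorch versions document register_full_backward_hook as a fixed replacement for register_backward_hook, which may be relevant here, though I have not verified it myself.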