Inconsistent gradient shapes for Conv1d and Conv2d

Hello. This is my first post on the PyTorch forum, so forgive me if there is not enough detail.

I am trying to use register_backward_hook to get the input gradient from a 1D convolutional layer. The gradient shape is not what I expected, and it is inconsistent with Conv2d.

For Conv2d, the gradient for an input of shape (1, 1, 28, 28) has shape (1, 1, 28, 28).
I therefore expected the gradient for a Conv1d input of shape (1, 1, 28) to have shape (1, 1, 28).
Instead, the gradient reported for Conv1d has shape (1, 2, 1, 28).

I have included a minimal test case below.

import torch
import torch.nn as nn

def fun(module, grad_in, grad_out):
    # Print the shape of the gradient reported for the layer's input
    print('grad_in', grad_in[0].shape)

net1d = nn.Sequential(nn.Conv1d(1, 2, 1))
net2d = nn.Sequential(nn.Conv2d(1, 2, 1))

x1d = torch.randn(1, 1, 28, requires_grad=True)
x2d = torch.randn(1, 1, 28, 28, requires_grad=True)

net1d[0].register_backward_hook(fun)
net2d[0].register_backward_hook(fun)

print('Conv1d Gradient Shape')
l = net1d(x1d)
l.backward(torch.ones_like(l))
print('Conv2d Gradient Shape')
l = net2d(x2d)
l.backward(torch.ones_like(l))

Output:

Conv1d Gradient Shape
grad_in torch.Size([1, 2, 1, 28])
Conv2d Gradient Shape
grad_in torch.Size([1, 1, 28, 28])

Could someone help me understand the shape of the gradient for Conv1d? Am I misunderstanding something, or does this output seem incorrect?

Hi,

As mentioned in the doc, the backward hooks on nn.Module are not working properly at the moment :confused:
Hopefully we'll manage to fix them soon.

You should ignore these results. If you want proper hooking, you should register hooks on the Tensors directly, e.g. l.register_hook(your_fn).
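For example, here is a minimal sketch of that approach, reusing the toy Conv1d setup from the question (the variable names and print messages are just illustrative):

import torch
import torch.nn as nn

conv1d = nn.Conv1d(1, 2, 1)
x1d = torch.randn(1, 1, 28, requires_grad=True)

# Hook the tensors directly instead of the module
x1d.register_hook(lambda grad: print('grad w.r.t. input ', grad.shape))
out = conv1d(x1d)
out.register_hook(lambda grad: print('grad w.r.t. output', grad.shape))

out.backward(torch.ones_like(out))
# Prints:
# grad w.r.t. output torch.Size([1, 2, 28])
# grad w.r.t. input  torch.Size([1, 1, 28])

With tensor hooks the input gradient has the expected shape (1, 1, 28), matching the input.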

Hello,

Thanks for the quick response. Do you have an estimate for when the backward hooks will be fixed?

It has been on the roadmap for a very long time, but it was blocked by required autograd improvements.
I am currently making these improvements, so the hooks will be fixable afterwards.
