Hello,
I am currently working with the HoughNet paper (https://arxiv.org/abs/2007.02355), which has its code publicly available. I tried to replace the ConvTranspose2d at (https://github.com/nerminsamet/houghnet/blob/master/src/lib/models/networks/houghnet_large_hourglass.py#L280) with a Conv2d. However, I get this error message:
RuntimeError: Given groups=1, weight of size [9, 1, 17, 17], expected input[2, 9, 128, 192] to have 1 channels, but got 9 channels instead
The original configuration is as follows:
Sequential(
(0): ConvTranspose2d(9, 1, kernel_size=(17, 17), stride=(1, 1), padding=(8, 8), bias=False)
)
input_size = torch.Size([2, 9, 1, 128, 192])
weight_size = torch.Size([9, 1, 17, 17])
The error occurs when the configuration is as follows:
Sequential(
(0): Conv2d(9, 1, kernel_size=(17, 17), stride=(1, 1), padding=(8, 8), bias=False)
)
input_size = torch.Size([2, 9, 1, 128, 192])
weight_size = torch.Size([9, 1, 17, 17])
As far as I understand, Conv2d should be applicable to this input. I tried to replicate this in a small trial script, but there I had to use a 4D input and wasn't able to reproduce the error. Whenever I have 4D inputs, I can replace my conv layers with transposed convs and vice versa. Any idea why this might occur?
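For reference, here is a minimal sketch of the kind of trial code I used (the input shape here is my own 4D stand-in, not the HoughNet tensors). With a plain 4D input both layers run, but note that their stored weight layouts differ, which may be related to the error above:

```python
import torch
import torch.nn as nn

# 4D stand-in input: (batch, channels, H, W)
x = torch.randn(2, 9, 128, 192)

# ConvTranspose2d stores its weight as (in_channels, out_channels, kH, kW)
deconv = nn.ConvTranspose2d(9, 1, kernel_size=17, stride=1, padding=8, bias=False)
print(deconv.weight.shape)  # torch.Size([9, 1, 17, 17])
print(deconv(x).shape)      # torch.Size([2, 1, 128, 192])

# Conv2d stores its weight as (out_channels, in_channels, kH, kW)
conv = nn.Conv2d(9, 1, kernel_size=17, stride=1, padding=8, bias=False)
print(conv.weight.shape)    # torch.Size([1, 9, 17, 17])
print(conv(x).shape)        # torch.Size([2, 1, 128, 192])
```

So a Conv2d whose weight is [9, 1, 17, 17] would actually be a Conv2d(1, 9, ...) expecting a 1-channel input, which matches the wording of the error message, but in my 4D trial I could not trigger it.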
Thanks.