Achieve the same result with torch.nn.Conv1d using torch.nn.Conv2d

For fun, I tried to reproduce the result of torch.nn.Conv1d using torch.nn.Conv2d. I think conv1d is a special case of conv2d: just make the input tensor's height 1 and the kernel height 1. But my test code shows their results are not the same:

import torch
input_for_conv2d = torch.randn(8, 6, 1, 100)
input_for_conv1d = input_for_conv2d.reshape(8, 6, 100)
conv2d = torch.nn.Conv2d(in_channels=6, out_channels=10, kernel_size=(1, 3), padding=(0, 1))
conv1d = torch.nn.Conv1d(in_channels=6, out_channels=10, kernel_size=3, padding=1)
# assign weights of conv2d to conv1d
conv1d.weight.data = conv2d.weight.data.reshape(10, 6, 3)
output_conv2d = conv2d(input_for_conv2d)
print(f"==>> output_conv2d.shape: {output_conv2d.shape}")
output_conv1d = conv1d(input_for_conv1d)
print(f"==>> output_conv1d.shape: {output_conv1d.shape}")
sum_conv2d = torch.sum(output_conv2d)
print(f"==>> sum_conv2d: {sum_conv2d}")
sum_conv1d = torch.sum(output_conv1d)
print(f"==>> sum_conv1d: {sum_conv1d}")
# The sums are not the same

Can I achieve the same result with torch.nn.Conv1d using torch.nn.Conv2d? If so, how?

You copied the weight but not the bias, so the two layers add different (randomly initialized) bias terms. Set bias=False in both layers, or copy the bias too, and it should work.
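For example, a minimal sketch of the fix that copies both the weight and the bias (using squeeze to drop the height-1 kernel dimension instead of reshape, which is equivalent here):

```python
import torch

torch.manual_seed(0)
x2d = torch.randn(8, 6, 1, 100)   # N, C, H=1, W
x1d = x2d.reshape(8, 6, 100)      # N, C, L

conv2d = torch.nn.Conv2d(in_channels=6, out_channels=10, kernel_size=(1, 3), padding=(0, 1))
conv1d = torch.nn.Conv1d(in_channels=6, out_channels=10, kernel_size=3, padding=1)

# Copy BOTH the weight and the bias from conv2d to conv1d
with torch.no_grad():
    conv1d.weight.copy_(conv2d.weight.squeeze(2))  # (10, 6, 1, 3) -> (10, 6, 3)
    conv1d.bias.copy_(conv2d.bias)

out2d = conv2d(x2d).squeeze(2)  # (8, 10, 1, 100) -> (8, 10, 100)
out1d = conv1d(x1d)
print(torch.allclose(out2d, out1d, atol=1e-6))  # True
```

With the bias copied as well, the two outputs match up to floating-point tolerance, confirming that Conv1d is the height-1 special case of Conv2d.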
