Conv2D not working correctly in C++

I have a kernel of size [3,3] and an image of size [B, 3, H, W]

I want to apply the same kernel in parallel across those 3 channels; the channels should not share any data.

This is my code

    auto module = torch::nn::Conv2d(
        torch::nn::Conv2dOptions(3, 3, {3, 3}).padding(1).bias(false));
    module->weight = kernel.unsqueeze(0)
                         .unsqueeze(0)
                         .repeat({1, 3, 1, 1})
                         .to(in_image.device());

    return module(in_image);

The size of the output is [1, 1, H, W]

But the same code implemented in Python

    conv = torch.nn.Conv2d(in_channels=3, out_channels=3, groups=1, kernel_size=(3, 3), padding=3 // 2, bias=False)
    conv.weights = x_kernel.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)
    out = conv(rgb_image)

It gives [1, 3, H, W]. What gives?

Your Python code is wrong: conv.weights = ... creates a new .weights attribute instead of replacing the .weight parameter. The layer therefore still uses its default randomly initialized weight of shape [3, 3, 3, 3], which is why you see 3 output channels in Python.
To actually replace the parameter, change it to:

x_kernel = torch.randn(3, 3)
rgb_image = torch.randn(1, 3, 24, 24)

conv = torch.nn.Conv2d(in_channels=3, out_channels=3, groups=1, kernel_size=(3, 3), padding=3 // 2, bias=False)
# nn.Parameter only works if you imported torch.nn as nn; use the full path
conv.weight = torch.nn.Parameter(x_kernel.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1))
out = conv(rgb_image)
print(out.shape)
> torch.Size([1, 1, 24, 24])

and you will get the same output size as in libtorch: the assigned weight has shape [1, 3, 3, 3], so the layer produces a single output channel summed over the 3 input channels, regardless of the out_channels=3 you passed to the constructor.
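Side note: neither version does what you described in the question. A weight of shape [1, 3, 3, 3] sums over all 3 input channels. To convolve each channel independently with the same kernel (no data shared across channels), use a grouped (depthwise) convolution with groups=3 and a weight of shape [3, 1, 3, 3]. A minimal sketch, reusing the names from the snippet above:

```python
import torch

x_kernel = torch.randn(3, 3)
rgb_image = torch.randn(1, 3, 24, 24)

# groups=3 splits the convolution so each output channel sees exactly
# one input channel; the weight shape becomes
# [out_channels, in_channels // groups, kH, kW] = [3, 1, 3, 3].
conv = torch.nn.Conv2d(in_channels=3, out_channels=3, groups=3,
                       kernel_size=(3, 3), padding=1, bias=False)
conv.weight = torch.nn.Parameter(
    x_kernel.unsqueeze(0).unsqueeze(0).repeat(3, 1, 1, 1))

out = conv(rgb_image)
print(out.shape)  # torch.Size([1, 3, 24, 24])
```

The libtorch equivalent is Conv2dOptions(3, 3, {3, 3}).groups(3).padding(1).bias(false) with the weight repeated as {3, 1, 1, 1} instead of {1, 3, 1, 1}.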