Conv2d with weight sharing in one dimension of input?

How can I implement Conv2d with weight sharing in only one dimension of the input?
That is, given an input image I(x, y), I want to perform a convolution such that every slice I[:, y] uses the same filter weights, while each slice I[x, :] uses different weights.

Would a kernel size of [kh, 1] work? Or do you want a two-dimensional kernel with shared weights along the width?

import torch
import torch.nn as nn

x = torch.randn(1, 3, 24, 24)
# Note: a (3, 1) kernel has no horizontal extent, but it still slides over
# the whole image, so its weights remain shared in both dimensions.
conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=(3, 1))
output = conv(x)  # shape: (1, 6, 22, 24)
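If the goal is truly a different filter per row while still sharing weights along the width, one way to sketch this is a grouped Conv1d: fold the height dimension into the channel dimension and set groups to the number of rows, so each row gets its own independent filter that slides only along the width. The shapes and the choice of 6 output channels per row below are assumptions for illustration, not a built-in PyTorch layer for this.

```python
import torch
import torch.nn as nn

N, C, H, W = 1, 3, 24, 24
out_per_row = 6   # output channels produced per row (hypothetical choice)
k = 3             # kernel length along the shared (width) dimension

x = torch.randn(N, C, H, W)

# groups=H gives every height position its own set of weights, while the
# 1D convolution still slides (shares weights) along the width.
conv = nn.Conv1d(in_channels=C * H,
                 out_channels=out_per_row * H,
                 kernel_size=k,
                 groups=H,
                 padding=k // 2)

# Move H next to C before flattening so that group h sees exactly the
# C input channels of row h (channel index = h * C + c).
y = conv(x.permute(0, 2, 1, 3).reshape(N, H * C, W))  # (N, out_per_row*H, W)

# Restore image layout: output channel index = h * out_per_row + o.
y = y.reshape(N, H, out_per_row, W).permute(0, 2, 1, 3)
print(y.shape)  # torch.Size([1, 6, 24, 24])
```

Each of the H groups holds an independent (out_per_row, C, k) weight tensor, which is exactly "shared along width, unshared along height"; swap the permute if you want sharing along height instead.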