Yes, your description is correct.
The conv layer will use a kernel of size 3 and slide it along the T
dimension, producing one activation map per output channel (here 4), each of length T - 3 + 1 = 8.
import torch
import torch.nn as nn

B, C, T = 3, 4, 10
x = torch.randn(B, C, T)  # input: [batch, channels, time]
conv = nn.Conv1d(in_channels=C, out_channels=C, kernel_size=3)
out = conv(x)
print(out.shape)
> torch.Size([3, 4, 8])
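If you want to keep the temporal length unchanged, a common option is to add padding. A minimal sketch (same shapes as above, padding chosen so that T_out = T + 2*padding - kernel_size + 1 = T):

```python
import torch
import torch.nn as nn

B, C, T = 3, 4, 10
x = torch.randn(B, C, T)
# padding=1 with kernel_size=3 preserves the T dimension: 10 + 2*1 - 3 + 1 = 10
conv_same = nn.Conv1d(in_channels=C, out_channels=C, kernel_size=3, padding=1)
out = conv_same(x)
print(out.shape)  # torch.Size([3, 4, 10])
```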