Feature dimension in Convolution

If we use an RGB input and set out_channels to 32, are the features of the R channel extracted with ~12 of the filters?
nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1),  # in_channels=3 for RGB

No, each of the 32 filters in the conv layer will use all input channels to create a single output activation map. These outputs will then be stacked into the 32 output channels.
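You can verify this by inspecting the layer's weight shape; a minimal sketch:

```python
import torch
import torch.nn as nn

# 3 input channels (RGB), 32 filters, 3x3 kernel
conv = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1)

# Each of the 32 filters has shape [3, 3, 3]: it convolves all 3 input
# channels at once and sums the results into a single activation map.
print(conv.weight.shape)  # torch.Size([32, 3, 3, 3])

x = torch.randn(1, 3, 224, 224)
out = conv(x)
print(out.shape)          # torch.Size([1, 32, 224, 224])
```

So no filter is dedicated to R alone; every filter mixes information from all three channels.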

Thanks for your explanation.

nn.Conv2d(in_channels=3, out_channels=32),  # all of RGB
and
nn.Conv2d(in_channels=1, out_channels=32),  # R only
nn.Conv2d(in_channels=1, out_channels=32),  # G only
nn.Conv2d(in_channels=1, out_channels=32),  # B only

The computation time of the upper and lower versions is about the same, but can the results sometimes be very different?

Sorry, I don’t fully understand the use case. If you are re-initializing new conv layers, the results would always differ unless you are properly seeding the code, setting the filter weights and bias manually to defined values, or are loading a state_dict. If you want to split the filters to work on each input channel separately, you could use the groups argument.
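A minimal sketch of the groups argument: with groups=3, each of the 3 input channels is convolved by its own set of filters, mimicking three separate single-channel conv layers in one module (out_channels must be divisible by groups, so 96 is used here for 32 filters per channel):

```python
import torch
import torch.nn as nn

# Each group sees only 1 input channel, so each filter has shape [1, 3, 3].
grouped = nn.Conv2d(in_channels=3, out_channels=96, kernel_size=3,
                    padding=1, groups=3)
print(grouped.weight.shape)  # torch.Size([96, 1, 3, 3])

x = torch.randn(1, 3, 64, 64)
out = grouped(x)
print(out.shape)  # torch.Size([1, 96, 64, 64])
# Output channels 0-31 are computed from R only,
# 32-63 from G only, and 64-95 from B only.
```

This is equivalent in result to running three independent single-channel conv layers and concatenating their outputs along the channel dimension.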

For instance, suppose that among R, G, and B, the R channel carries no useful features while G and B do. In that case, when the full RGB input is convolved by a filter A, the resulting feature map A might contain worse features than a feature map A generated by convolving R, G, and B individually and adding the results. Just thinking…

you could use the groups argument.

I did not know about it.
I will check it, thank you for the information.