I have a question about initializing 2D group convolutions. I am trying to initialize a custom ResNet encoder that uses group convolutions with pretrained weights. However, those pretrained weights belong to normal convolution layers, not grouped ones, so there is an obvious dimension mismatch between the group convolution weights and the pretrained weights. I know there is a trick for squeezing the group weights into a single tensor, mentioned in (Fan out calculation broken for group (depthwise) convolution · Issue #23854 · pytorch/pytorch · GitHub), but in my case I need to access each group's kernel and initialize it separately. In short, I need to convert pretrained normal convolution layers into group convolution layers with the same weights. How am I supposed to do this kind of initialization?
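For what it's worth, here is a minimal sketch of the approach I have been considering (layer sizes and the `groups` value are just illustrative). Since a group convolution's weight is `(out_channels, in_channels // groups, k, k)` while the normal conv's is `(out_channels, in_channels, k, k)`, one cannot copy the full tensor; each group only sees its own slice of input channels, so the best one can do is copy the matching block-diagonal slices of the pretrained weight (the off-block input connections of the original conv are simply dropped):

```python
import torch
import torch.nn as nn

groups = 4
# Stand-in for a pretrained normal conv layer (weights would come from a checkpoint).
pretrained = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
grouped = nn.Conv2d(64, 128, kernel_size=3, padding=1, groups=groups, bias=False)

# pretrained.weight: (128, 64, 3, 3); grouped.weight: (128, 64 // groups, 3, 3).
# Group g's output channels only see input channels
# [g * in_per_group : (g + 1) * in_per_group], so copy that diagonal block.
out_per_group = pretrained.out_channels // groups
in_per_group = pretrained.in_channels // groups
with torch.no_grad():
    for g in range(groups):
        grouped.weight[g * out_per_group:(g + 1) * out_per_group].copy_(
            pretrained.weight[g * out_per_group:(g + 1) * out_per_group,
                              g * in_per_group:(g + 1) * in_per_group])
```

Note that this only reproduces the pretrained layer's output exactly if its off-diagonal blocks happen to be zero; otherwise it is just a reasonable initialization, not an equivalent layer.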