Why does the weight-init function disappear after putting the net on CUDA?

code is here

    elif args.model == 'efficientnetb4':
        net = smp.Unet('efficientnet-b4', classes=4, activation=None)
        print('111:', dir(net))
        print('load smp efficientnet-b4')

    #print('net:', net)

    log_dir = os.path.join(args.out_dir, 'log')
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    writer = SummaryWriter(log_dir)
    writer.add_graph(net, torch.rand(1, n_channels, args.imgh, args.imgw))

    if args.cuda:
        net = torch.nn.DataParallel(net)
        net = net.cuda()

    print('222:', dir(net))
class Model(nn.Module):

    def __init__(self):
        super().__init__()
    def initialize(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_uniform_(m.weight, mode='fan_in', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

`initialize` is defined in the `Model` class; `Unet` inherits from `EncoderDecoder`, and `EncoderDecoder` inherits from `Model`. My question is: why does the `initialize` function disappear after putting the net on multi-GPU CUDA?

Because `DataParallel` wraps your model; it doesn't modify the object itself.
You can call `model.module.initialize()`.
That is, inside a `DataParallel` object your original model is stored at `object.module`, so custom methods are reached through that attribute.
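To illustrate, here is a minimal self-contained sketch (using a tiny stand-in model rather than the actual `smp.Unet`): the wrapper does not expose the custom `initialize` method, but the wrapped model at `.module` still has it. This runs on CPU too, since `DataParallel` can be constructed without GPUs.

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Stand-in for the real model, with the same kind of initialize() method."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)

    def initialize(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_uniform_(m.weight, mode='fan_in', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)

net = TinyModel()
net = nn.DataParallel(net)   # original model is now stored at net.module

print(hasattr(net, 'initialize'))         # False: the wrapper doesn't proxy custom methods
print(hasattr(net.module, 'initialize'))  # True: the original model still has it
net.module.initialize()                   # call the custom method through .module
```

The same pattern applies to any custom attribute or method on a `DataParallel`-wrapped (or `DistributedDataParallel`-wrapped) model: go through `.module`.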

Thanks, nice answer. Got it.