Model.apply does not expose the module name

The main idea is that I want to apply parameter initialization only to specific layers, where the condition is tested against the layer name.

The usual recommendation is model.apply(init_fn).

model.apply iterates through the modules of the model by calling children(), which in turn calls the named_children() method.

named_children() yields both the name and the module, but children() yields only the module, so at this level we lose the name information.

As a result, the name condition for applying the initialization cannot be tested inside init_fn.
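To make the problem concrete, here is a minimal sketch with a toy model (the layer names and shapes are made up for illustration): named_children() yields (name, module) pairs, but the function passed to apply() only ever receives the module, so the name is gone by the time init_fn runs.

```python
import torch.nn as nn

# Toy model; the layer names 'conv1' and 'relu' are just examples.
model = nn.Sequential()
model.add_module('conv1', nn.Conv2d(3, 8, 3))
model.add_module('relu', nn.ReLU())

# named_children() yields (name, module) pairs.
names = [name for name, _ in model.named_children()]
print(names)  # ['conv1', 'relu']

# But apply() passes only the module to the callback, so init_fn
# can inspect the type, not the name.
def init_fn(m):
    print(type(m).__name__)

model.apply(init_fn)  # prints Conv2d, ReLU, then Sequential itself
```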

Help!

You could move the condition out of the weight_init method and apply it in a loop iterating over all named children manually. If the condition is met, you could then call into weight_init, which could check the type of the module or even apply the torch.nn.init method directly on the child module.

thank you very much for your reply!
I suppose this suggests calling weights_init directly instead of using the model.apply(weights_init) technique:
something like this:

init_layers = ['conv12', 'conv2', 'conv31', 'conv4']  # layers that are allowed to be initialized
for name, child in model.named_children():
    if name in init_layers:  # check the layer name
        weights_init(child)  # pass the module itself, not the (name, module) tuple

Otherwise, I don’t see how to apply your suggestion with the model.apply() technique.

Yes, I would probably call the weights_init method on the desired layers directly to have a bit more control over it.
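Putting that together, here is one possible end-to-end sketch. The model, layer names, and choice of xavier_uniform_ are all assumptions for illustration; the point is only that the loop over named_children() keeps the name available for the whitelist check, while weights_init still guards on the module type.

```python
import torch.nn as nn

# Hypothetical model; only the conv layers should be re-initialized.
model = nn.Sequential()
model.add_module('conv1', nn.Conv2d(3, 8, 3))
model.add_module('conv2', nn.Conv2d(8, 8, 3))
model.add_module('fc', nn.Linear(8, 2))

init_layers = ['conv1', 'conv2']  # layers allowed to be initialized

def weights_init(m):
    # Type check so we only touch conv layers; init choice is arbitrary here.
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

# The name is available here, unlike inside model.apply().
for name, child in model.named_children():
    if name in init_layers:
        weights_init(child)
```

After the loop, conv1 and conv2 carry the new initialization while fc keeps its default parameters.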