Torchvision suggestion: refactor weight initialization into a common `._initialize_weights()` method

A suggestion for refactoring the canonical weight-initialization code shared by all of the networks.

Use case: working in AutoML, I reinitialize networks frequently, since I don't care about the learned weights, only the hyperparameters I'm learning. Reloading a model and moving it to CUDA is very slow, so I simply reinitialize the existing model in place, which is much faster.
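As a minimal sketch of the in-place reinitialization I mean (not torchvision code, just an illustration): every built-in leaf module such as `nn.Conv2d` or `nn.Linear` exposes `reset_parameters()`, so walking the module tree re-randomizes weights on their current device, skipping the reload and `.cuda()` round trip. Note this applies each module's *default* init, not any custom init a model does in `__init__`.

```python
import torch.nn as nn

def reinit_(model: nn.Module) -> None:
    """Re-randomize all parameters in place.

    Relies on leaf modules exposing reset_parameters(); tensors are
    re-initialized in place, so they stay on their current device.
    """
    for m in model.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()

# usage sketch
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Linear(8, 2))
before = model[0].weight.detach().clone()
reinit_(model)  # weights are now re-randomized in place
```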

I noticed that among the models that ship with torchvision, googlenet and mnasnet have an `._initialize_weights()` method; alexnet and shufflenetv2 rely on the default initialization of `nn.Conv2d` (or whichever module); and the others inline their initialization in `.__init__()`.
Refactoring this into a standard `._initialize_weights()` method would be a tad cleaner than the current mix.
I suppose it could go into `nn.Module`, but since every network has small bespoke differences, keeping it per-model, the way googlenet already does, probably makes more sense.
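To make the proposed shape concrete, here is a hypothetical toy model (`TinyNet` and its specific init choices are mine, loosely modeled on the googlenet/mnasnet style, not copied from torchvision): all init logic lives in one `._initialize_weights()` method called at the end of `__init__`, which also gives AutoML users a single method to call when re-randomizing.

```python
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical model showing the proposed pattern: init logic lives
    in a single _initialize_weights() method instead of inline in __init__."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3)
        self.bn = nn.BatchNorm2d(16)
        self.fc = nn.Linear(16, 10)
        self._initialize_weights()

    def _initialize_weights(self):
        # per-model bespoke choices go here; these are illustrative defaults
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.ones_(m.weight)
                nn.init.zeros_(m.bias)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.zeros_(m.bias)
```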

Alternatively, I can simply snapshot a cloned/detached state dict up front and reload that instead.
Never mind, I suppose; still, I would prefer a single init method.
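For completeness, the workaround mentioned above can be sketched as follows (a small stand-in model; the mutation step is a placeholder for a training run): deep-copy the freshly initialized state dict once, then `load_state_dict` restores it in place, again avoiding a model rebuild.

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
init_state = copy.deepcopy(model.state_dict())  # cloned, detached snapshot

# mutate the weights (stand-in for a training run)
with torch.no_grad():
    model.weight.add_(1.0)

model.load_state_dict(init_state)  # restores the initial weights in place
```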