Your method may work for some networks (though it should also handle 1-d parameters such as biases), but it is more common to delegate initialization to the submodules themselves for finer control: each of the other nn.init initializers has its use, depending on the distribution of a submodule's inputs and on the activation function that precedes it.
I should have mentioned the obvious: torch.nn built-in modules initialize their own parameters (with initializers hard-coded in their reset_parameters() methods), so your code is only useful if you are unhappy with the defaults or if you create parameters manually; but in that case you would probably want to fine-tune the initialization locally, per module, anyway.
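As a sketch of the per-module approach: Module.apply() recurses over all submodules, so you can dispatch on module type and pick an initializer suited to the activation that follows each layer. The layer sizes and the choice of Kaiming init here are just illustrative assumptions:

```python
import torch
import torch.nn as nn

def init_weights(m):
    # Dispatch on submodule type; each branch overrides the
    # defaults set by that module's own reset_parameters().
    if isinstance(m, nn.Linear):
        # Kaiming init suits layers followed by ReLU.
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        if m.bias is not None:
            # 1-d parameters (biases) need handling too.
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.BatchNorm1d):
        nn.init.ones_(m.weight)
        nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
model.apply(init_weights)  # apply() visits every submodule recursively
```

A custom nn.Module can achieve the same effect by calling code like this at the end of its own `__init__`, which keeps the initialization local to the module that owns the parameters.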