If you're just looking for a function that takes a linear layer and initializes its weights and biases with custom methods, you can do it like this:
I use `torch::NoGradGuard no_grad` to initialize the layer's weights and biases; it is the C++ equivalent of PyTorch's `with torch.no_grad():`. According to the Autograd mechanics page of the PyTorch documentation: "The implementations in torch.nn.init also rely on no-grad mode when initializing the parameters as to avoid autograd tracking when updating the initialized parameters in-place." You can remove that line if you know there's no need for it.