How does one initialize weights using a non-nn.init function?

Hello everyone.

Let’s say I wanted to use a torch function such as

torch.range(1, X, 0.5) to initialize my weights for all CNN layers, where X would need to be flexible depending on the size of the CNN filters. How can I do that?

With nn.init it was very easy; I would just find the method I want to use within init library and just feed in the layer weights as a parameter to the method.

For example:

def init_weights(m):
    if isinstance(m, nn.Conv2d):
        torch.nn.init.uniform_(m.weight)  # initialize weights from a uniform distribution

It seems vanilla torch has a more extensive collection of methods that would let me build a more customized initialization.
The range method above is just a contrived example, but it will help me understand how to create my own initialization using vanilla PyTorch methods.

I’m sorry if this question has been asked already.

Wrap the update in torch.no_grad() so autograd does not track the in-place modification:

with torch.no_grad():
    m.weight.DO_SOMETHING_()
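As a concrete sketch of the range-based idea from the question: generate enough values with torch.arange (torch.range is deprecated in favor of torch.arange), reshape them to the filter's shape, and copy them into the weight in place under no_grad. The step value 0.5 and the Sequential model below are just illustrative choices.

```python
import torch
import torch.nn as nn

def init_weights(m):
    # Custom initialization: fill each Conv2d weight with an
    # increasing sequence starting at 1 with step 0.5, shaped to
    # match the filter dimensions.
    if isinstance(m, nn.Conv2d):
        with torch.no_grad():
            n = m.weight.numel()  # total number of weight elements
            # arange produces n values: 1, 1.5, 2, ...; the slice
            # guards against floating-point end-point off-by-one.
            values = torch.arange(1, 1 + 0.5 * n, 0.5)[:n]
            m.weight.copy_(values.reshape(m.weight.shape))

# Usage: apply() walks every submodule recursively, just like
# with the nn.init-based init_weights.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.ReLU())
model.apply(init_weights)
```

Any in-place tensor op (copy_, fill_, uniform_, etc.) works the same way inside the no_grad block; the key point is mutating m.weight.data-free, via the in-place method, rather than reassigning the parameter.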