How to manually initialize weights

I’m trying to initialize the weights of a conv layer as below,

import torch.nn as nn

class some_model(nn.Module):
    def __init__(self):
        super(some_model, self).__init__()
        self.blur = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
        self.blur.weight[0][0][0][0] = 0.0751
        self.blur.weight[0][0][0][1] = 0.1238
        ...

    def forward(self, x):
        xblur = self.blur(x)
        return xblur

But I got the following error message:
ValueError: can’t optimize a non-leaf Tensor

What is the correct way to initialize weights manually?

I should instead do:
self.blur.weight.data[0][0][0][0]=0.0751

weird flex but okay…

Hi,

Using .data is discouraged; you should do:

with torch.no_grad():
    self.blur.weight[0][0][0][0]=0.0751
    self.blur.weight[0][0][0][1]=0.1238
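If you have many values to set, it is less tedious to copy the whole kernel in one go under the same no_grad pattern. A minimal sketch; the full 3x3 kernel below is a placeholder, only the first two values come from the post above:

import torch
import torch.nn as nn

blur = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)

# Placeholder 3x3 kernel; only 0.0751 and 0.1238 appear in the original post.
kernel = torch.tensor([[0.0751, 0.1238, 0.0751],
                       [0.1238, 0.2042, 0.1238],
                       [0.0751, 0.1238, 0.0751]])

with torch.no_grad():
    # weight has shape (out_channels, in_channels, 3, 3) = (1, 1, 3, 3)
    blur.weight.copy_(kernel.view(1, 1, 3, 3))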

What does torch.no_grad() do here?

I know normally we would put evaluation calls inside it, since gradients are not needed. But why do we need it when initializing weights?

Everything that happens inside this block is not recorded by autograd, so the in-place operations that change the values of your weights are not recorded either. That makes it a good way to initialize the values.

I guess my question then is what does ‘being recorded’ or ‘not being recorded’ do?

That is not such an easy question.
In some sense, it tells autograd to ignore these operations when computing gradients.
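
A small sketch to illustrate the difference; the tensor here is just a stand-in for a layer weight:

import torch

w = torch.ones(3, requires_grad=True)

# Recorded: this multiplication enters the autograd graph,
# so gradients flow back to w.
loss = (w * 2).sum()
loss.backward()
print(w.grad)                        # tensor([2., 2., 2.])

# Not recorded: inside no_grad the in-place write is ignored by autograd,
# and w stays an ordinary leaf tensor that an optimizer can still update.
with torch.no_grad():
    w[0] = 5.0
print(w.is_leaf, w.requires_grad)    # True True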