With recent versions of PyTorch, you should not use .data anymore; use the torch.no_grad() context manager instead.
See how the nn.init module works, for example, here.
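As a minimal sketch of that pattern: the nn.init functions mutate a parameter in place without recording the operation in autograd (they run under torch.no_grad() internally), so the parameter remains trainable afterwards. The layer sizes here are arbitrary, just for illustration:

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)

# In-place initialization; the trailing underscore marks the in-place variant.
# These calls do not appear in the autograd graph.
nn.init.xavier_uniform_(layer.weight)
nn.init.constant_(layer.bias, 0.0)

# The parameter is still a leaf that requires grad, so training proceeds normally.
print(layer.weight.requires_grad)  # True
```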
And yes to all your other questions: it will work exactly that way.
Note that setting requires_grad = False means that no gradients are computed for the parameter (or they are kept at 0). This does not necessarily mean the weights won’t be updated: Adam, for example, will change the weights even for a gradient of 0 because of its momentum terms.
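To make that concrete, here is a small sketch (with an arbitrary learning rate and toy loss) showing Adam moving a parameter on a step where its gradient is exactly zero, purely because of the momentum state built up on an earlier step:

```python
import torch

# A single trainable parameter and an Adam optimizer.
p = torch.nn.Parameter(torch.ones(1))
opt = torch.optim.Adam([p], lr=0.1)

# First step with a real gradient: this populates Adam's momentum buffers.
loss = (3 * p).sum()
loss.backward()
opt.step()

# Second step with a gradient of exactly 0.
opt.zero_grad()
p.grad = torch.zeros_like(p)
before = p.detach().clone()
opt.step()
after = p.detach().clone()

# The weight still changed: Adam's update uses its running averages,
# which are non-zero from the previous step.
print((before - after).abs().item())
```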
with torch.no_grad():
    w = torch.Tensor(weights).reshape(self.weight.shape)
    self.weight.copy_(w)
I have tried the code above, and the weights are properly assigned the new values.
However, after I manually assign them, the weights no longer update following loss.backward(); they stay fixed at the values I assigned. (The weights update correctly if I don't assign them manually.)
Could you please help me with this problem?
Your help is highly appreciated!