Layer weight vs. weight.data

Manipulating the weight directly (while it still requires gradients) will most likely give you an error when you try to call backward, because you are modifying a leaf variable inside the autograd graph:

import torch
import torch.nn as nn

lin = nn.Linear(10, 2)
lin.weight[0][0] = 1.
x = torch.randn(1, 10)
output = lin(x)
output.mean().backward()
> RuntimeError: leaf variable has been moved into the graph interior

Using .data, on the other hand, would work, but is generally not recommended: changing the weight through .data after the model has already performed a forward pass can silently yield wrong gradients, since autograd cannot detect the in-place change and throw an error.
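Here is a small sketch of that failure mode (the seed and the fill_(0.) modification are just illustrative choices): the backward pass of a linear layer computes the gradient w.r.t. its input from the saved weight tensor, so zeroing the weight via .data between the forward and backward passes silently corrupts x.grad instead of raising an error.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lin = nn.Linear(10, 2)

x = torch.randn(1, 10, requires_grad=True)
output = lin(x)

# Silently overwrite the weight after the forward pass via .data.
# In-place ops on .data bypass autograd's version counter, so no
# error is raised.
lin.weight.data.fill_(0.)

output.mean().backward()

# x.grad is computed from the zeroed weight, not the one actually
# used in the forward pass, so it is silently wrong (all zeros here).
print(x.grad)
```

Had the weight been modified through the tensor itself (without .data or no_grad), autograd would have flagged the in-place change instead of producing this wrong gradient.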
I would recommend using with torch.no_grad() instead:

lin = nn.Linear(10, 2)
with torch.no_grad():
    lin.weight[0][0] = 1.

x = torch.randn(1, 10)
output = lin(x)
output.mean().backward()