Accessing weights and biases without detaching them

How can I access the weights/biases without detaching them from the graph? I currently access them through state_dict(), but the returned tensors seem to be detached, and I get: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn. Here is a minimal example that reproduces the error:

import torch
import torch.nn as nn
import torch.optim as optim

X = torch.randn(100, 1)
Y = 2 * X + 1 + 0.1 * torch.randn(100, 1)

model = nn.Linear(1, 1)

optimizer = optim.Adam(model.parameters(), lr=0.1) 
num_epochs = 100

for epoch in range(num_epochs):
    optimizer.zero_grad()
    
    # state_dict() returns detached tensors, so this "loss" has no grad_fn
    loss = model.state_dict()['weight']
    loss.backward()  # raises: element 0 of tensors does not require grad
    optimizer.step()
      
    if (epoch + 1) % 10 == 0:
        print(f"Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}")

I have figured it out. Instead of going through model.state_dict(), I can simply use model.weight or model.bias, which are the live nn.Parameter objects and stay attached to the graph.
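
For reference, here is the same minimal loop with that one change applied (everything else kept as in the example above); the .sum() call is only an assumption to make the backward target an explicit scalar:

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(1, 1)

optimizer = optim.Adam(model.parameters(), lr=0.1)
num_epochs = 100

for epoch in range(num_epochs):
    optimizer.zero_grad()

    # model.weight is the live nn.Parameter, still attached to the graph,
    # so backward() can compute a gradient for it
    loss = model.weight.sum()
    loss.backward()
    optimizer.step()

    if (epoch + 1) % 10 == 0:
        print(f"Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}")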