I created a neural network with one hidden layer, so I have two tensors of weights.

Then I train it, so I expect the weights to update. The problem is that only the second set of weights updates; the first stays the same!

Why? I have tried several possible solutions, but nothing worked.

Thank you for your help.

This is the code (I removed the imports and the dataset loading). I print `par` before and after training, and it is clear that the first set of weights does not change:

```
class NN1(nn.Module):
    def __init__(self, D_1, D_2, H, D_out):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(D_1*D_2, H, bias=False),
            nn.ReLU(),
            nn.Linear(H, D_out, bias=False),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NN1(28, 28, 756, 10)
par = list(model.parameters())
print(par)

def train_loop(model, training_data, batch_size, eta, epochs):
    optimizer = torch.optim.SGD(par, lr=eta)
    loss = nn.CrossEntropyLoss()
    train_dataloader = DataLoader(training_data, batch_size=batch_size, shuffle=True)
    for epoch in range(epochs):
        for batch_idx, (X, y) in enumerate(train_dataloader):
            ypred = model(X)
            train_loss = loss(ypred, y)
            if batch_idx % (6400/batch_size) == 0:
                print(f'Epoch [{epoch + 1}/{epochs}] Batch [{batch_idx}/{len(train_dataloader)}] Training loss: {train_loss.item()}')
            optimizer.zero_grad()
            train_loss.backward()
            optimizer.step()
    return model

print(par)
```
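For reference, here is a minimal, self-contained sketch of the kind of check I am trying to do (toy layer sizes and random data, not my actual dataset): snapshot each weight tensor with `detach().clone()` before a step, then compare afterwards. Printing the live `par` list before and after training compares a tensor with itself, so cloned snapshots are needed for a real comparison.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Same shape of model as in the question, but with tiny toy dimensions:
# Linear -> ReLU -> Linear, both without bias.
model = nn.Sequential(
    nn.Linear(4, 8, bias=False),
    nn.ReLU(),
    nn.Linear(8, 3, bias=False),
)
params = list(model.parameters())

# Snapshot the weights: detach().clone() copies the values instead of
# keeping a live view that updates in place along with the model.
before = [p.detach().clone() for p in params]

optimizer = torch.optim.SGD(params, lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical random batch standing in for the real dataset.
X = torch.randn(16, 4)
y = torch.randint(0, 3, (16,))

optimizer.zero_grad()
loss_fn(model(X), y).backward()
optimizer.step()

# Report, per weight tensor, whether a gradient arrived and whether
# the values actually moved after one SGD step.
for i, (b, p) in enumerate(zip(before, params)):
    print(i, "grad is None:", p.grad is None,
          "changed:", not torch.equal(b, p.detach()))
```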