Say that we have a simple model:

```
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 4)
        self.fc2 = nn.Linear(4, 4)
        self.fc3 = nn.Linear(4, 2)

    def forward(self, x):
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        x = self.fc3(x)
        x = torch.sigmoid(x)
        return x

model = Net()
loss_func = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE,
                            momentum=MOMENTUM, weight_decay=L2_REG)
```

and we perform some operation on it, say, normalizing the weights of the last layer:

```
model.fc3.weight = nn.Parameter(model.fc3.weight/torch.max(model.fc3.weight))
```

Would this last operation be tracked by the optimizer? And when I call

```
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

will the appropriate gradients be computed, and will the optimization step be applied to the new parameter?
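To make the question concrete, here is a minimal, self-contained check I could run (a sketch using a bare `nn.Linear` instead of the full `Net` above, with an arbitrary learning rate): it compares object identities to see whether the reassigned parameter is still the tensor the optimizer holds, and contrasts it with an in-place update under `torch.no_grad()`, which modifies the existing parameter tensor without replacing it.

```python
import torch
import torch.nn as nn

# Case 1: reassigning .weight creates a NEW Parameter object.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model.weight = nn.Parameter(model.weight / torch.max(model.weight))
# Collect the identities of the tensors the optimizer was constructed with.
opt_params = {id(p) for g in optimizer.param_groups for p in g["params"]}
reassigned_tracked = id(model.weight) in opt_params

# Case 2: an in-place update keeps the same tensor object in the module.
model2 = nn.Linear(4, 2)
optimizer2 = torch.optim.SGD(model2.parameters(), lr=0.1)
with torch.no_grad():  # in-place ops on a leaf requiring grad need no_grad
    model2.weight.div_(torch.max(model2.weight))
opt2_params = {id(p) for g in optimizer2.param_groups for p in g["params"]}
inplace_tracked = id(model2.weight) in opt2_params

print("reassigned param still tracked:", reassigned_tracked)
print("in-place param still tracked:", inplace_tracked)
```

The identity check works because `optimizer.param_groups` stores references to the parameter tensors that existed when the optimizer was constructed; it does not re-read the module's attributes later.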