Reset the model parameters, but the model stops training

Let's take the example from the tutorial; suppose I train the model as follows:

import torch
import math

# Fit y = sin(x) with a third-order polynomial of x
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)  # shape (2000, 3): columns x, x^2, x^3

model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

loss_fn = torch.nn.MSELoss(reduction='sum')

optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
for t in range(2000):
    y_pred = model(xx)
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
....
99 18966.865234375
199 7549.88232421875
299 2734.0283203125
399 1074.5540771484375
499 667.1275024414062
599 529.754638671875

Now if I reset the model parameters as below and retrain it, the model seems to stop training:

import torch.nn as nn
from torch.nn import init

def weight_init(m):
    if isinstance(m, nn.Linear):
        init.xavier_normal_(m.weight.data, gain=1.0)
        if m.bias is not None:
            init.zeros_(m.bias)

model.apply(weight_init)

optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
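As a sanity check that the reset itself takes effect (a sketch, assuming the same 3-to-1 model as above; `model.apply` visits every submodule recursively):

```python
import torch
import torch.nn as nn
from torch.nn import init

def weight_init(m):
    # Re-initialize only the Linear layers; apply() visits every submodule
    if isinstance(m, nn.Linear):
        init.xavier_normal_(m.weight.data, gain=1.0)
        if m.bias is not None:
            init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(3, 1), nn.Flatten(0, 1))
before = model[0].weight.clone()
model.apply(weight_init)

weights_changed = not torch.equal(before, model[0].weight)
bias_is_zero = bool(torch.all(model[0].bias == 0))
```

So the weights really are re-drawn and the bias is zeroed; the reset is not the problem.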

for t in range(2000):
    y_pred = model(xx)
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())
    optimizer.zero_grad()
    loss.backward()

...
99 46380.5
199 46380.5
299 46380.5
399 46380.5
499 46380.5
599 46380.5
699 46380.5
799 46380.5
899 46380.5
999 46380.5
1099 46380.5
...

Can anyone help me explain this?

Just to double-check: did you simply omit `optimizer.step()` from the second code snippet, when it is in fact present in your actual code?
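If it really is missing, that would explain the constant loss: `loss.backward()` only computes gradients, and without `optimizer.step()` the parameters are never updated. A sketch of the loop with the step restored (assuming the same data, model, and optimizer setup as in the question):

```python
import math
import torch

# Same data/model/optimizer setup as in the question
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
xx = x.unsqueeze(-1).pow(torch.tensor([1, 2, 3]))

model = torch.nn.Sequential(torch.nn.Linear(3, 1), torch.nn.Flatten(0, 1))
loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)

start = loss_fn(model(xx), y).item()
for t in range(2000):
    loss = loss_fn(model(xx), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # without this line the parameters never update,
                      # so the printed loss stays constant
final = loss.item()
```

With the step in place, the loss decreases across iterations as in the first run.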