Optimizer won't change angle_pred and the loss does not change

Hi, thanks for the help with the rotation matrix. Now that I have it, I am trying to rotate a set of points and then recover the angle with PyTorch. This is my code. I am not sure whether the loss function is right, but my angle_pred and the loss do not change after 100 loops.

Do you have an idea where the problem is?

import torch

x = torch.rand((256,2))
angle = torch.tensor(0.08)
rot = torch.tensor([[torch.cos(angle), -torch.sin(angle)], [torch.sin(angle), torch.cos(angle)]])

x_rot = torch.matmul(x, rot)

angle_pred = torch.nn.Parameter(torch.zeros(1), requires_grad=True)

optimizer = torch.optim.Adam([angle_pred], lr = 0.1)


for step in range(100):
    rot_back = torch.tensor([[torch.cos(-angle_pred), -torch.sin(-angle_pred)],
                             [torch.sin(-angle_pred), torch.cos(-angle_pred)]], requires_grad=True)
    x_pred = torch.matmul(x_rot, rot_back)

    loss = ((x_pred-x)**2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(loss)

There used to be a warning about this, but torch.tensor is the wrong tool here: it treats its inputs as plain numbers rather than connecting them through autograd, so the graph is cut between angle_pred and rot_back and no gradient ever reaches the parameter (the fact that you had to add requires_grad=True to rot_back should have been a hint :slight_smile: ). Edit: Let me emphasize that this is a common enough thing to get wrong, so don't be discouraged – I know at least 20 people who think that you can never know enough about autograd. :wink:
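To make the cut visible, here is a minimal check (a sketch, not from your code; it assumes a PyTorch version where the torch.tensor call still goes through rather than erroring out):

import torch

a = torch.zeros((), requires_grad=True)
bad = torch.tensor([torch.cos(a), torch.sin(a)])   # copies the values out as plain numbers
good = torch.stack([torch.cos(a), torch.sin(a)])   # keeps the ops in the autograd graph
print(bad.grad_fn)   # None -- no path back to a
print(good.grad_fn)  # <StackBackward0 ...> -- gradients can flow to a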

This (and not because it is aesthetically pleasing, it isn't) was the reason I used the nested torch.stack in the other thread. If you plug that back in, it'll work a lot better; a sketch follows below.
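For reference, a minimal sketch of the corrected loop (my reconstruction of the fix, so treat the details as assumptions; the substantive changes are the nested torch.stack for both rotation matrices and a 0-dim angle_pred so the stacked rows come out with shape (2,)):

import torch

x = torch.rand(256, 2)
angle = torch.tensor(0.08)
# the forward rotation needs no gradients, but stack mirrors the trainable version below
rot = torch.stack([torch.stack([torch.cos(angle), -torch.sin(angle)]),
                   torch.stack([torch.sin(angle), torch.cos(angle)])])
x_rot = torch.matmul(x, rot)

angle_pred = torch.nn.Parameter(torch.zeros(()))  # 0-dim, so cos/sin return scalars
optimizer = torch.optim.Adam([angle_pred], lr=0.1)

for step in range(100):
    # stack keeps rot_back connected to angle_pred in the graph; no explicit
    # requires_grad=True needed, it is inherited from the parameter
    rot_back = torch.stack([torch.stack([torch.cos(-angle_pred), -torch.sin(-angle_pred)]),
                            torch.stack([torch.sin(-angle_pred), torch.cos(-angle_pred)])])
    x_pred = torch.matmul(x_rot, rot_back)
    loss = ((x_pred - x) ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item(), angle_pred.item())  # loss should head toward 0 and angle_pred toward 0.08

With the graph intact, loss.backward() actually populates angle_pred.grad, and Adam has something to step on.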

Best regards

Thomas


Again, thank you a lot! Respect!