Custom loss and custom Module, weights won't update

Hi, I’m having a problem with my code. I want to implement the loss function myself (in the code below the loss is greatly simplified), but I don’t understand why the parameters and the loss are not updated even though the gradient is not null.

class LOSS(nn.Module):

    def __init__(self, x):
        super(LOSS, self).__init__()
        self.para = nn.Parameter(torch.tensor(x, requires_grad=True))

    def forward(self):
        F = 0.0  # accumulator must be initialized before the loop
        for time in range(1, len(self.para)):
            # m is assumed to be defined elsewhere
            F += 1 / 2 * m * (self.para[time] - self.para[time - 1]) ** 2
        return F


model = LOSS(x)

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)


I’m sorry Chloe, but there are a multitude of weird things here. Your loss function should be separate from your model. Most often it is just a function (not using nn.Parameter; that belongs in the model) that returns a scalar.
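To illustrate the split between model and loss, here is a minimal sketch (the `SimpleNet` model, the random inputs, and the zero target are made-up illustrations, not from your code):

```python
import torch
import torch.nn as nn

# The model holds the learnable parameters (via nn.Linear here).
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def forward(self, x):
        return self.linear(x)

# The loss is just a plain function that returns a scalar tensor.
def mse_loss(prediction, target):
    return ((prediction - target) ** 2).mean()

model = SimpleNet()
x = torch.randn(8, 4)
target = torch.zeros(8, 1)
loss = mse_loss(model(x), target)  # 0-dim (scalar) tensor, ready for backward()
```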

Don’t call zero_grad right before your backward and step. Do the zero_grad before sending inputs to the model (in your case, before calling model()).
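A minimal training loop following that ordering might look like this (a sketch using a simplified version of your loss with m dropped; the starting values and step count are assumptions):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Simplified version of the original LOSS module: the parameter vector
# lives in the module, forward() returns a scalar.
class Chain(nn.Module):
    def __init__(self, x):
        super().__init__()
        self.para = nn.Parameter(torch.tensor(x))

    def forward(self):
        # Sum of squared differences between neighbouring entries.
        return 0.5 * ((self.para[1:] - self.para[:-1]) ** 2).sum()

model = Chain([0.0, 1.0, 2.0])
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

for step in range(100):
    optimizer.zero_grad()  # clear old gradients first
    loss = model()         # forward pass
    loss.backward()        # compute gradients
    optimizer.step()       # update the parameters
```

With this ordering the gradients computed by backward() are still present when step() runs, so the parameters actually move and the loss shrinks toward zero.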

Perhaps start with a tutorial or a prebuilt project to get the basics down and modify a single part of it :slight_smile: Even brushing up on your Python would help.


Thanks, I’ll take a better look at the tutorial.
