Loss stays constant during training

Hello,

I am trying to train a simple model, but unfortunately the loss stays constant.
Here is the part of my code that I think contains the bug:

def transform(x):
    # reshape to (batch_size, 81, 9)
    x = x.view(x.shape[0], 81, 9)
    # take the argmax over the last dimension and convert to double
    x = torch.argmax(x, dim=2).double()
    x.requires_grad_(True)

    return x
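
For reference, this is roughly how the shapes behave when I pass a batch through it (the batch size of 4 is made up; the 729 just comes from 81 * 9):

import torch

out = torch.randn(4, 729)   # made-up stand-in for my layer's raw output
t = transform(out)
print(t.shape, t.dtype)     # torch.Size([4, 81]) torch.float64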

def train(self):

    # some other code

    optim = torch.optim.SGD(self.parameters(), lr=0.1)
    loss_fun = nn.MSELoss()
    for e in range(epochs):
        optim.zero_grad()

        # pass the data through my only layer
        output = self(train_data)

        # convert the labels to double (I get an error if I delete this)
        train_labels = train_labels.double()
        train_labels.requires_grad_(True)

        # apply some changes to my output
        loss = loss_fun(transform(output), train_labels)

        loss.backward()

        optim.step()
        # append the loss to this list so I can plot it
        train_loss.append(loss.detach().numpy())
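
This is just how I plot train_loss afterwards (assuming matplotlib), in case that matters:

import matplotlib.pyplot as plt

plt.plot(train_loss)    # one appended loss value per epoch
plt.xlabel("epoch")
plt.ylabel("MSE loss")
plt.show()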

list(self.parameters()) returns tensors filled with random data, which I think is expected.
If I iterate over my parameters and check .grad, I always get None, so I think I am somehow breaking the gradients (maybe overwriting the ones that were already computed?). This is roughly the check I run right after loss.backward():
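
for p in self.parameters():
    # every parameter prints requires_grad=True but grad is None
    print(p.requires_grad, p.grad)

Thank you!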