Element 0 of tensors does not require grad?

Hey guys, I'm having a problem with my tensors not having gradients. Here is my code. embeddings is the direct output of a pretrained ResNet-50, and cls is the labels given by the dataloader. I'm trying to copy the values of embeddings and cls to update some no-grad values in tcl, and then use these no-grad values to calculate a loss for embeddings. In my design, loss_group[i] contains a single-value tensor loss for embeddings[i], so it can call backward() to update embeddings.

# my code (embeddings, cls, tcl, margin and optimizer come from earlier in my script)
import torch
import torch.nn.functional as F

tmp_embeddings = embeddings.data.cpu().numpy()  # detached copy for the center update
tmp_cls = cls.cpu().numpy()
tcl.updateCenters(tmp_embeddings, tmp_cls)

loss_group = []
for i in range(len(tmp_cls)):
    cur_cls = tmp_cls[i]
    cur_center = tcl.get_center(cur_cls)
    cur_center = cur_center.cuda()
    criterion = torch.nn.MSELoss()
    loss1 = criterion(embeddings[i], cur_center)  # distance to this sample's class center

    diff_center = tcl.search_min_center(tmp_embeddings[i])
    diff_center = diff_center.cuda()
    loss2 = criterion(embeddings[i], diff_center)  # distance to the nearest center found by tcl
    loss_group.append(loss1 + margin - loss2)

loss_group = torch.FloatTensor(loss_group)
loss = F.relu(loss_group)
loss = loss.mean()

loss.backward()
optimizer.step()

It comes with the error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.
So I tried print(loss_group[0].requires_grad) and it says True.
Kinda lost here, any help? Many thanks!

Could you try to use torch.stack instead of torch.FloatTensor to create your loss_group:

loss_group = torch.stack(loss_group)

If you create a new tensor, e.g. by using FloatTensor, you detach the inputs from the computation graph, so backward() can no longer reach embeddings.
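
If you want to see the difference yourself, here is a small standalone check (the tensor a and the squared losses are just placeholders, not your actual model):

import torch

a = torch.randn(4, requires_grad=True)
losses = [(a[i] ** 2) for i in range(4)]   # each entry carries a grad_fn

stacked = torch.stack(losses)
print(stacked.requires_grad)               # True - still attached to a
stacked.mean().backward()                  # works, a.grad gets populated

rebuilt = torch.FloatTensor([float(l) for l in losses])
print(rebuilt.requires_grad)               # False - just raw values, the graph is gone
# rebuilt.mean().backward() would raise exactly the RuntimeError you are seeing

With torch.stack, your loss_group keeps its connection to embeddings, so F.relu, mean() and backward() all stay inside the graph.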

Oh, it works! Thanks!