RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Kindly assist me in finding out why I am still getting this error message: “RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn”

The section of code I am running is:

model.zero_grad()
log_probs = model(images)
loss = self.criterion(log_probs, labels)

loss.backward()
optimizer.step()

log_probs might be detached from the computation graph. Check whether its .grad_fn attribute points to a valid function or is None (detached). In the latter case, print the .grad_fn attribute of the intermediate tensors in your model’s forward method to find which operation detaches the tensor from the computation graph. Often this is caused by e.g. rewrapping a tensor via x = torch.tensor(x), using another library such as numpy for an intermediate computation, explicitly calling tensor.detach(), etc.
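A minimal sketch of what this looks like in practice: a tensor produced by differentiable ops has a populated .grad_fn, while a detached one has grad_fn of None, and calling backward() through it raises exactly the error from the question.

```python
import torch

x = torch.randn(3, requires_grad=True)

# Normal op: the result stays attached to the graph.
y = (x * 2).sum()
print(y.grad_fn)  # a valid backward function, e.g. <SumBackward0 ...>

# Detaching (or rewrapping, or a numpy round-trip) breaks the graph.
z = y.detach()
print(z.grad_fn)  # None -> backward() through z will fail

try:
    z.backward()
except RuntimeError as e:
    print(e)  # element 0 of tensors does not require grad and does not have a grad_fn
```

Applying the same check to your code (`print(log_probs.grad_fn)` right after the forward pass) will tell you whether the detachment happens inside the model or in the loss computation.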