Hi,
I would like to initialize a zero loss tensor that will be used to accumulate losses dynamically, based on some condition. Sometimes no loss is accumulated at all, and in that case calling backward throws the following runtime error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Sample code looks like this:
import torch
# some inputs
x = torch.rand(10, 3)
# some targets
y = torch.rand(10, 1)
# some parameter
W = torch.nn.Parameter(torch.rand(3, 1))
# total loss
total_loss = torch.tensor([0.0])
# some condition
if torch.rand(1) > 0.5:
    loss = torch.sum((torch.matmul(x, W) - y) ** 2)
    total_loss += loss
total_loss.backward()
So, the question is: how should this zero loss tensor be initialized to handle this case?
Oh right, you can’t use it in-place… The first one is definitely best.
Or use your original code and check that total_loss.requires_grad is True before calling backward on it.
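A minimal sketch of that second suggestion, reusing the variable names from the sample above (the shapes and the random condition are just the ones from the question): accumulate out-of-place so the graph is kept, then only call backward when a loss was actually added.

```python
import torch

# Same setup as in the question
x = torch.rand(10, 3)
y = torch.rand(10, 1)
W = torch.nn.Parameter(torch.rand(3, 1))

total_loss = torch.tensor([0.0])
if torch.rand(1) > 0.5:
    loss = torch.sum((torch.matmul(x, W) - y) ** 2)
    # out-of-place add: total_loss now carries a grad_fn
    total_loss = total_loss + loss

# Guard: only backpropagate when a loss was actually accumulated,
# otherwise total_loss has no grad_fn and backward would raise the error above
if total_loss.requires_grad:
    total_loss.backward()
```

If the condition was never true, total_loss is still the plain zero tensor with requires_grad=False, and the guard simply skips the backward call instead of raising the RuntimeError.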