How to initialize a zero loss tensor

Hi,
I would like to initialize a zero loss tensor that accumulates losses dynamically based on some condition. Sometimes no loss is accumulated at all, and in that case backpropagation throws the following runtime error:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Sample code looks like this:

import torch

# some inputs
x = torch.rand(10, 3)

# some targets
y = torch.rand(10, 1)

# some parameter
W = torch.nn.Parameter(torch.rand(3, 1))

# total loss
total_loss = torch.tensor([0.0])

# some condition
if torch.rand(1) > 0.5:
    loss = torch.sum((torch.matmul(x, W) - y) ** 2)
    total_loss += loss
# fails with the error above whenever the condition was False,
# because total_loss then has no grad_fn
total_loss.backward()

So, the question is: how should I initialize this zero loss tensor to handle this case?

Hi,

I guess you have two options.
Either do the backward only when accumulation actually happened:

total_loss = 0.
# your accumulation code here
if torch.is_tensor(total_loss):
    total_loss.backward()
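For concreteness, here is that first option applied to your original sample (a minimal sketch; the random condition just stands in for whatever actually gates your accumulation):

import torch

x = torch.rand(10, 3)
y = torch.rand(10, 1)
W = torch.nn.Parameter(torch.rand(3, 1))

# start from a plain Python float rather than a tensor
total_loss = 0.

# stand-in for the real condition that gates accumulation
if torch.rand(1) > 0.5:
    loss = torch.sum((torch.matmul(x, W) - y) ** 2)
    total_loss += loss  # float + tensor promotes total_loss to a tensor

# backward only if something was actually accumulated
if torch.is_tensor(total_loss):
    total_loss.backward()

The trick is that adding a tensor to the Python float promotes total_loss to a tensor, so the torch.is_tensor check tells you whether any loss was accumulated.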

Or create it with requires_grad=True from the start:

total_loss = torch.tensor([0.0], requires_grad=True)
# your accumulation code here
total_loss.backward() # This will always work now.

I tried your second solution; it throws this runtime error:

total_loss += loss
RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

Ah right, you can’t use it in-place… The first option is definitely the best.
Or use your original code and check total_loss.requires_grad before calling backward on it.
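That check would look like this (again a minimal sketch over your original sample):

import torch

x = torch.rand(10, 3)
y = torch.rand(10, 1)
W = torch.nn.Parameter(torch.rand(3, 1))

# a plain tensor: requires_grad is False, so in-place ops on it are allowed
total_loss = torch.tensor([0.0])

if torch.rand(1) > 0.5:
    loss = torch.sum((torch.matmul(x, W) - y) ** 2)
    total_loss += loss  # after this, total_loss.requires_grad is True

# requires_grad is only True if a loss with grad history was accumulated
if total_loss.requires_grad:
    total_loss.backward()

The in-place add is fine here because total_loss starts out as a tensor that does not require grad; the leaf-variable error only occurs when the tensor itself was created with requires_grad=True.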


Thanks, I will go with this solution then.