Getting error 'float' object has no attribute 'backward'

Hello,

I have written the following loss function, but it fails with "'float' object has no attribute 'backward'" during training. I looked into the earlier posts Extending PyTorch — PyTorch master documentation and [Solved] What is the correct way to implement custom loss function? - #10 by Tofigh_Naghibi, but could not figure out what the issue in my loss function is. Can anyone please help with this? Thanks!

import torch
from torch import nn


class IntersectionOverUnion(nn.Module):
    """
    Implementation of the Soft-Dice Loss function.

    Arguments:
        num_classes (int): number of classes.
        eps (float): value of the floating-point epsilon.
    """

    def __init__(self, num_classes, eps=1e-5):
        super().__init__()
        # init class fields
        self.num_classes = num_classes
        self.eps = eps

    # define the forward pass
    def forward(self, preds, targets):
        """
        Compute the Soft-Dice loss.

        Arguments:
            preds (torch.FloatTensor): tensor of predicted labels,
                of shape (B, num_classes, H, W).
            targets (torch.LongTensor): tensor of ground-truth labels,
                of shape (B, 1, H, W).

        Returns:
            mean_loss (float): mean loss value averaged over classes.
        """
        loss = 0
        # iterate over all classes
        for cls in range(self.num_classes):
            # get ground truth for the current class
            target = (targets == cls).float()
            # get prediction for the current class
            pred = preds[:, cls]
            # calculate intersection
            intersection = (pred * target).sum()
            # compute dice coefficient
            # iou = (2 * intersection + self.eps) / (pred.sum() + target.sum() + self.eps)
            iou = (intersection + self.eps) / (intersection + 1)
            # compute the negative logarithm of the obtained dice coefficient
            loss = loss - iou.log()

        # get mean loss by class value
        loss = loss / self.num_classes

        # print("Value:", loss.item())
        # loss_k = torch.tensor(loss.item(), dtype=torch.double)
        loss_k = loss.item()
        return loss_k

loss.item() returns the value of the loss as a Python float, which doesn't have a backward attribute.
Your code returns that float rather than the loss tensor, so the training loop has nothing to call backward() on.
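
A minimal sketch of the fix, keeping the rest of your forward unchanged: return the loss tensor itself and reserve .item() for logging only.

    # keep the loss as a tensor so Autograd can trace it through the graph
    # print("Value:", loss.item())  # .item() is fine for logging
    return loss                     # instead of: return loss.item()

The training loop can then call loss.backward() as usual, because the returned loss is still part of the computation graph.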

Thanks, it worked!

How did you fix this?

.backward() is a tensor method, so make sure you are calling it on the right object and not a Python float:

import torch

x = torch.tensor([1.], requires_grad=True)
x.backward()  # works: x is a tensor

y = x.item()  # y is now a Python float
y.backward()  # fails
# AttributeError: 'float' object has no attribute 'backward'

Thank you, it worked!

Hi, what does the line x = torch.tensor([1.], requires_grad=True) mean, please?
Thanks & Best Regards
AMJS

It sets the requires_grad attribute of the tensor to True, which tells Autograd to track all operations involving this tensor and build a computation graph during the forward pass; that graph is then used to compute the gradients in the backward pass.
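
A small sketch of what that tracking enables:

    import torch

    x = torch.tensor([2.], requires_grad=True)
    y = x ** 2        # Autograd records this operation in the graph
    y.backward()      # computes dy/dx at x = 2
    print(x.grad)     # tensor([4.]), since dy/dx = 2x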
