Use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor)

Here is the forward() method of a loss class that computes a loss from the elements of a confusion matrix. c_loss is a float that is being converted to a tensor, and the conversion line produces the following warning:

UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).

    loss = torch.tensor(c_loss, requires_grad=True)

The warning makes no sense to me, since c_loss is not a tensor. What am I doing wrong here?

    # confusion_matrix here is sklearn.metrics.confusion_matrix
    def forward(self, inputs: torch.Tensor, targets: torch.Tensor):
        # predicted class = argmax over the class dimension
        _, predictions = torch.max(inputs, 1)
        c = confusion_matrix(targets.cpu().numpy(), predictions.cpu().numpy())

        # c[0, 1] = class-0 samples predicted as 1; c[1, 1] = class-1 samples predicted as 1
        c_loss = c[0, 1] / (c[0, 1] + c[1, 1])
        # this line results in the warning
        loss = torch.tensor(c_loss, requires_grad=True)

        return loss

What does type(c_loss) show?

It is just float. That's why the message does not make sense to me.

A plain float wouldn't raise that warning, so did you actually print type(c_loss) as asked?
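One way to settle this is to check directly which inputs trigger the warning. A minimal sketch (the helper name `triggers_copy_warning` is made up for illustration) that catches the warning with Python's `warnings` module:

```python
import warnings

import torch


def triggers_copy_warning(value) -> bool:
    """Return True if torch.tensor(value) emits the copy-construct UserWarning."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")  # don't suppress repeated warnings
        torch.tensor(value, requires_grad=True)
        return any("copy construct" in str(w.message) for w in caught)


# A plain Python float does not trigger the warning...
print(triggers_copy_warning(0.5))                # → False
# ...but passing an existing tensor (even a 0-dim one) does.
print(triggers_copy_warning(torch.tensor(0.5)))  # → True
```

If the second case matches what you see, then somewhere upstream c_loss is actually a tensor (or something tensor-like) rather than the float you expect, which is why printing type(c_loss) at the exact line that warns is the decisive check.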