I had to detach my model’s output to compute the loss value, because the loss function is not implemented in PyTorch and therefore accepts NumPy arrays rather than tensors.

The problem is that the loss value is now detached from the computational graph, so no backpropagation can be performed.

**PS**: the loss function is `scipy.optimize.linear_sum_assignment`.
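
A simplified version of what I’m doing (the actual model is replaced by a random cost matrix here, since the exact setup isn’t shown):

```python
import torch
from scipy.optimize import linear_sum_assignment

# stand-in for the model's output: an (N, M) cost matrix
cost = torch.rand(4, 4, requires_grad=True)

# scipy only accepts NumPy arrays, so the tensor must be detached
row, col = linear_sum_assignment(cost.detach().numpy())
loss = torch.tensor(cost.detach().numpy()[row, col].sum())

# fails: loss is not connected to the graph, so nothing can be backpropagated
loss.backward()  # RuntimeError: element 0 of tensors does not require grad
```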

You could write a custom `autograd.Function` and implement the `backward` function in addition to the `forward`, so that the computation graph wouldn’t break.

This tutorial gives you an example.
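
In case it helps, here is a minimal sketch of such a `Function`. It assumes the model produces an (N, M) cost matrix and that the loss is the total cost of the optimal assignment (both assumptions, since your exact setup isn’t shown). Since `linear_sum_assignment` returns discrete indices, the matching is treated as constant in `backward`, and the gradient is simply 1 at the matched entries:

```python
import torch
from scipy.optimize import linear_sum_assignment

class HungarianLoss(torch.autograd.Function):
    """Sum of cost-matrix entries selected by the optimal assignment.

    The assignment indices are treated as constants in backward, so the
    gradient is 1 at the matched positions and 0 elsewhere.
    """

    @staticmethod
    def forward(ctx, cost_matrix):
        # scipy works on NumPy arrays, so detach and convert inside forward
        cost_np = cost_matrix.detach().cpu().numpy()
        row_ind, col_ind = linear_sum_assignment(cost_np)
        ctx.save_for_backward(
            torch.from_numpy(row_ind), torch.from_numpy(col_ind)
        )
        ctx.shape = cost_matrix.shape
        ctx.dtype = cost_matrix.dtype
        ctx.device = cost_matrix.device
        # scalar loss on the same device/dtype as the input
        return cost_matrix.new_tensor(cost_np[row_ind, col_ind].sum())

    @staticmethod
    def backward(ctx, grad_output):
        row_ind, col_ind = ctx.saved_tensors
        # gradient mask: 1 at the matched entries, 0 everywhere else
        grad = torch.zeros(ctx.shape, dtype=ctx.dtype, device=ctx.device)
        grad[row_ind, col_ind] = 1.0
        return grad_output * grad
```

Usage then looks like this, and the gradient flows back into the cost matrix:

```python
cost = torch.rand(4, 4, requires_grad=True)  # stand-in for a model output
loss = HungarianLoss.apply(cost)
loss.backward()
print(cost.grad)  # 1.0 at the matched entries, 0 elsewhere
```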