How can I use a numpy function as the loss function without getting runtime errors?

For my task, I do not need to compute gradients. I am simply replacing nn.L1Loss with a numpy function (np.corrcoef) in my loss evaluation, but I get the following error:

RuntimeError: Can’t call numpy() on Variable that requires grad. Use var.detach().numpy() instead.

I couldn’t figure out how exactly to detach the graph (I tried torch.Tensor.detach(np.corrcoef(x, y)), but I still get the same error). I eventually wrapped everything in with torch.no_grad() as follows:

with torch.no_grad():
    predFeats = self.forward(x)
    targetFeats = self.forward(target)
    loss = torch.from_numpy(
        np.corrcoef(predFeats.cpu().numpy().astype(np.float32),
                    targetFeats.cpu().numpy().astype(np.float32))[1][1])

But this time I get the following error:

TypeError: expected np.ndarray (got numpy.float64)

I wonder, what am I doing wrong?

To my understanding:

np.corrcoef(x, y)

returns an ndarray.

torch.from_numpy(ndarray) → Tensor

This function requires an ndarray argument.
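A quick check confirms this: np.corrcoef on two vectors returns a 2x2 correlation matrix as an ndarray, but indexing into it twice yields a NumPy scalar, not an array. (The x/y values below are made up for illustration.)

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.1, 2.9, 4.2])

full = np.corrcoef(x, y)           # 2x2 correlation matrix
print(type(full), full.shape)      # <class 'numpy.ndarray'> (2, 2)

single = full[1][1]                # indexing twice yields a scalar
print(type(single))                # <class 'numpy.float64'>
```

The scalar is what trips up torch.from_numpy, which only accepts ndarrays.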

Can you print out the value of the expression below:

np.corrcoef(predFeats.cpu().numpy().astype(np.float32), targetFeats.cpu().numpy().astype(np.float32))[1][1]

to see where the problem is? Your error shows that this expression returns a numpy.float64 rather than a NumPy array.
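For reference, a minimal sketch of the workaround (the pred/target arrays below are placeholders, not your model's outputs): promote the scalar back to an ndarray with np.asarray before calling torch.from_numpy, or pass it straight to torch.tensor, which accepts NumPy scalars. Note also that the correlation between the two inputs is the off-diagonal entry [0][1]; the diagonal [1][1] is always 1.0.

```python
import numpy as np
import torch

# Placeholder feature vectors standing in for predFeats / targetFeats
pred = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
target = np.array([1.1, 1.9, 3.2, 3.8], dtype=np.float32)

# The pred/target correlation is the off-diagonal entry of the matrix
r = np.corrcoef(pred, target)[0][1]   # numpy.float64 scalar

# Option 1: promote the scalar back to a 0-d ndarray first
loss = torch.from_numpy(np.asarray(r))

# Option 2: torch.tensor accepts NumPy scalars directly
loss = torch.tensor(r)
```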


@Xiaoyu_Song You are right. I got the answer here


Thanks for the link, but isn’t this breaking the graph? I ask because I read this thread: Calculating loss with numpy function.