Getting: RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead

Hi, I get this runtime error and I am not sure how to solve it. I have tried a few different things, but none of them work. I am using a PINN to solve a hyperbolic PDE for Kvu.

I have a function:

```python
def eps1(self, x):
    return np.sin(x)
```

which I differentiate with:

```python
def eps1_grad(self, points):
    p = points.clone()
    p.requires_grad_(True)
    eps1xi = self.eps1(p)
    eps1_xi = autograd.grad(outputs=eps1xi, inputs=p,
                            grad_outputs=torch.ones_like(eps1xi),
                            create_graph=True)[0]
    return eps1_xi
```

Then I use the derived function in:

```python
def loss_Kvu(self, points):
    p = points.clone()
    p.requires_grad_(True)
    KvuKvv = self.forward(p)
    Kvu = KvuKvv[:, [0]]
    Kvv = KvuKvv[:, [1]]
    Kvu_grad = autograd.grad(outputs=Kvu, inputs=p,
                             grad_outputs=torch.ones_like(Kvu),
                             create_graph=True)[0]
    self.Kvu_x = Kvu_grad[:, [0]]
    self.Kvu_xi = Kvu_grad[:, [1]]
    x = points[:, [0]]
    xi = points[:, [1]]
    eps1_xi = self.eps1_grad(xi)
    f1 = (self.eps2(x) * self.Kvu_x - self.eps1(xi) * self.Kvu_xi
          - eps1_xi * Kvu - self.c2(xi) * Kvv)
    loss_f1 = self.loss_function(f1, f1_hat)
    return loss_f1
```

When eps1 returns np.sin(x) I get the RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. However, when eps1 returns x**2, there is no issue.
I cannot understand where the problem arises: is it already in the object where the function is defined, or only when I try to differentiate it? Any suggestions are appreciated.
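For reference, the difference can be reproduced outside the PINN entirely (a minimal sketch; the values here are arbitrary):

```python
import numpy as np
import torch

p = torch.linspace(0.0, 1.0, 5, requires_grad=True)

# ** dispatches to PyTorch's overloaded operator, so this works:
ok = p ** 2

# np.sin tries to convert the tensor to a NumPy array, which raises
try:
    np.sin(p)
    err = None
except RuntimeError as e:
    err = e

print(ok.requires_grad)  # True
print(err)
```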

NumPy is a different library from PyTorch, so autograd cannot track operations performed outside PyTorch. The ** operator is plain exponentiation, which PyTorch overloads for tensors, so x**2 stays inside the autograd graph. If you want operations to be tracked so that you can calculate a derivative, you need to use the equivalent PyTorch operation: in your case, replace np.sin(x) with torch.sin(x).
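A minimal sketch of that fix (standalone, not the original class): with torch.sin, autograd can differentiate eps1, and the gradient matches the analytic derivative cos(x).

```python
import torch

def eps1(x):
    # torch.sin is tracked by autograd, unlike np.sin
    return torch.sin(x)

p = torch.linspace(0.0, 1.0, 5, requires_grad=True)
y = eps1(p)
(grad,) = torch.autograd.grad(outputs=y, inputs=p,
                              grad_outputs=torch.ones_like(y),
                              create_graph=True)
print(torch.allclose(grad, torch.cos(p)))  # True
```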

If you want to use a different library (say NumPy or SciPy), you need to detach the tensor and convert it to a NumPy array (via .detach().numpy()); note that gradients will not flow through that step.
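And a sketch of the detach route, for when a NumPy or SciPy call is unavoidable (note that the graph is cut: autograd sees nothing of the NumPy computation):

```python
import numpy as np
import torch

p = torch.linspace(0.0, 1.0, 5, requires_grad=True)

# leave the graph: detach, convert, compute in NumPy, convert back
vals = np.sin(p.detach().numpy())
back = torch.from_numpy(vals)

print(back.requires_grad)  # False: no gradient flows through np.sin
print(torch.allclose(back, torch.sin(p.detach())))  # True
```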