I am new to PyTorch and am developing a DeepONet code. I need to create a loss function where autograd is applied to each sample of the model output. I used a for loop, but the code is very slow. Is there any way to vectorize the for loop so the gradients for all samples are computed in one go? I am attaching the loss function code:
import torch
import torch.nn as nn


def derivative(dy: torch.Tensor, x: torch.Tensor, order: int = 1) -> torch.Tensor:
    """
    Computes the order-th derivative of dy with respect to x.
    """
    for _ in range(order):
        dy = torch.autograd.grad(
            dy, x, grad_outputs=torch.ones_like(dy), create_graph=True, retain_graph=True
        )[0]
    return dy
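
For reference, here is a quick sanity check of the helper on a toy 1-D tensor (my own example, not part of the actual model code): the second derivative of x**3 should be 6*x.

x = torch.linspace(0.0, 1.0, steps=5, requires_grad=True)
y = x ** 3
d2y = derivative(y, x, order=2)
print(torch.allclose(d2y, 6 * x))  # prints True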
def loss_phy_mod(model: nn.Module, y_data, x_data):
    yy = model(y_data, x_data)
    # Per-sample derivative of each output row w.r.t. x_data (slow loop).
    jacobian_rows = [derivative(yy[i, :], x_data, 1)
                     for i in range(y_data.shape[0])]
    jacobian = torch.stack(jacobian_rows)
    loss = torch.mean((y_data.flatten() - jacobian.flatten()) ** 2)
    return loss
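
One direction I am considering (an untested sketch, not verified on my model): since PyTorch 1.11, torch.autograd.grad accepts is_grads_batched=True, which treats the leading dimension of grad_outputs as a batch and computes all the vector-Jacobian products in a single vmapped call. Assuming yy has shape (N, M), a batch of one-hot row masks should reproduce the stacked result of the loop:

def loss_phy_mod_vectorized(model: nn.Module, y_data, x_data):
    yy = model(y_data, x_data)  # assumed shape (N, M)
    N, M = yy.shape
    # v[b] is ones in row b and zeros elsewhere, so batch element b matches
    # derivative(yy[b, :], x_data, 1) from the loop above.
    v = torch.eye(N, device=yy.device, dtype=yy.dtype)
    v = v.unsqueeze(-1).expand(N, N, M).contiguous()
    jacobian = torch.autograd.grad(
        yy, x_data, grad_outputs=v, create_graph=True, is_grads_batched=True
    )[0]  # shape (N, *x_data.shape), same as torch.stack(jacobian_rows)
    return torch.mean((y_data.flatten() - jacobian.flatten()) ** 2)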