How to use torch.no_grad() if my loss function needs to compute the Jacobian matrix

Hi, I know that wrapping validation in with torch.no_grad() can save a lot of time and memory. However, part of my loss function needs to compute the Jacobian matrix, which requires torch.autograd.grad, and that does not work inside a no_grad() block. What should I do in this case? Do you have any suggestions?

There are some options, none of which are completely satisfactory:

  • Compute a metric and not the loss during validation.
  • Skip the torch.no_grad() context manager during validation so the loss can still be computed; if memory is limited, reduce the validation batch size to compensate.
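To illustrate the second option, here is a minimal sketch of a validation step that keeps gradients enabled so torch.autograd.grad can run. The model, the penalty function, and the batch size are all hypothetical; the penalty shown computes a vector-Jacobian product (not the full Jacobian) just to keep the example small. torch.enable_grad() is used defensively in case a caller wrapped the whole loop in no_grad():

```python
import torch

# Hypothetical tiny model and Jacobian-based penalty, for illustration only.
model = torch.nn.Linear(3, 2)

def jacobian_penalty(x):
    # The input must have requires_grad=True so autograd can
    # differentiate the output with respect to it.
    x = x.clone().requires_grad_(True)
    y = model(x)
    # A vector-Jacobian product via torch.autograd.grad;
    # create_graph=False suffices because we only need the value,
    # not gradients of this quantity.
    (grads,) = torch.autograd.grad(
        outputs=y,
        inputs=x,
        grad_outputs=torch.ones_like(y),
        create_graph=False,
    )
    return grads.pow(2).sum()

# Validation: no torch.no_grad() wrapper, but a smaller batch to limit
# memory; detach the result since we never backprop through it.
val_batch = torch.randn(4, 3)  # e.g. reduced from the training batch size
with torch.enable_grad():      # explicit, in case the caller used no_grad()
    val_loss = jacobian_penalty(val_batch).detach()
print(val_loss.item())
```

Detaching the result (or calling .item()) matters here: without no_grad(), each forward pass builds a graph, and holding undetached losses across iterations would keep those graphs alive and grow memory use.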