torch.autograd.functional.hvp does not work on a model in evaluation mode

import torch
from torch.autograd.functional import hvp
from torchvision import models

model = models.resnet18()
model.eval()  # hvp fails in eval mode; with model.train() it works

x = torch.randn((1, 3, 224, 224))
# Hessian-vector product of the maximal logit at x, using v = x
hvp(lambda _x: model(_x).max(), x, x, strict=True)

The above code raises the following error:

RuntimeError: The hessian of the user-provided function is independent of entry 0 in the grad_jacobian. This is not allowed in strict mode as it prevents from using the double backward trick to replace forward mode AD.
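
The error mentions the "double backward trick". As far as I understand, hvp essentially computes the gradient of the inner product grad(f)(x) · v with respect to x, roughly like this hand-rolled sketch (v is the vector, here a copy of x):

x = torch.randn((1, 3, 224, 224), requires_grad=True)
v = x.detach().clone()
out = model(x).max()
# first backward pass: gradient of the scalar output w.r.t. x, keeping the graph
(g,) = torch.autograd.grad(out, x, create_graph=True)
# second backward pass: gradient of (g . v) w.r.t. x, i.e. the HVP
(hv,) = torch.autograd.grad((g * v).sum(), x, allow_unused=True)

In eval mode I would expect hv to come back as None here, i.e. the graph of g does not depend on x, which seems to be exactly what strict mode is complaining about.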

If I switch the model to training mode, i.e., replace model.eval() with model.train(), hvp works.
However, I suspect this is not appropriate for a model containing Dropout or BatchNorm layers, since their behavior differs between training and evaluation.
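
Disabling strict mode avoids the error without leaving eval mode, but, according to the documentation of the strict flag, the returned HVP is then just zeros for any input the Hessian is detected to be independent of, so it silences the check rather than fixing the underlying problem:

model.eval()
x = torch.randn((1, 3, 224, 224))
# with strict=False no error is raised; hvp returns the function value
# and a zero tensor for inputs the Hessian does not depend on
out, hv = hvp(lambda _x: model(_x).max(), x, x, strict=False)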

Can anyone help me resolve this issue?

Development environment:

  • Ubuntu 20.04
  • Python 3.9.5
  • torch 1.10.0
  • torchvision 0.11.1
Update: I found that keeping the model in eval mode and flipping just the first BatchNorm layer back to training mode also lets hvp run:

model.eval()
# list(model.children())[1] is the first BatchNorm2d layer of resnet18
list(model.children())[1].training = True

I could run hvp with this modification, but it still looks like an issue to me, and the behavior seems strange.
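
For reference, here is a more general version of the same workaround that switches every BatchNorm layer back to training mode while leaving the rest of the model (e.g. Dropout) in eval mode. This is only a sketch; note that BatchNorm layers in training mode normalize with batch statistics and update their running averages:

from torch import nn

model.eval()
# flip only the BatchNorm layers back to training mode
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.train()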