Does torch.compile support gradient operations inside the model?

I need to calculate the gradient of the network's output with respect to its input inside my model. When the model is not compiled, training runs normally; when it is compiled, it reports an error while computing the gradient:

unsupported operand type(s) for *: 'Tensor' and 'NoneType'
File "/home/yklei/practice/mlmm_energy/test/debug/mlmm/model/model_pl.py", line 148, in
f = grad(
File "/home/yklei/practice/mlmm_energy/test/debug/mlmm/model/model_pl.py", line 89, in forward
for key in g.ndata.keys():
File "/home/yklei/practice/mlmm_energy/test/debug/mlmm/model/model_pl.py", line 220, in training_step
results = self(g_qmmm, cell = cell)
File "/home/yklei/practice/mlmm_energy/test/debug/mlmm_main_pl.py", line 38, in
cli = MyLightningCLI(LitMLMM, Molecule_DataModule)#, subclass_mode_model=True)
TypeError: unsupported operand type(s) for *: 'Tensor' and 'NoneType'
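For context, the failing call is the usual pattern of differentiating the network's output with respect to its input. A minimal sketch of that pattern (net and coords here are placeholders, not my actual model):

import torch

net = torch.nn.Linear(3, 1).cuda()  # stand-in for the real network
coords = torch.randn(8, 3, device="cuda:0", requires_grad=True)

energy = net(coords).sum()  # scalar output of the network
# gradient of the output w.r.t. the input, computed during training
(f,) = torch.autograd.grad(energy, coords, create_graph=True)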

I wrote a simple test to check whether the gradient operation is supported when the function is compiled:

import torch

def fn(x, y):
    # mark the inputs as requiring grad inside the compiled function
    x.requires_grad_()
    y.requires_grad_()
    a = torch.cos(x).cuda()
    b = torch.sin(y).cuda()
    # gradient of the output w.r.t. both inputs
    fmm = torch.autograd.grad(
        a + b,
        [x, y],
        grad_outputs=torch.ones_like(a + b),
        create_graph=True,
        retain_graph=True,
    )
    return a + b, fmm

new_fn = torch.compile(fn, backend="inductor")
input_tensor = torch.randn(10000).to(device="cuda:0")
a, fmm = new_fn(input_tensor, input_tensor)
print(a)
print(fmm)

It indeed reports an error:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Is there any way to solve this?
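One variant that might behave differently, assuming the failure comes from calling requires_grad_() inside the compiled region, is to mark the inputs as requiring grad before calling the compiled function. This is just a sketch of what I mean, not a confirmed fix:

import torch

def fn(x, y):
    a = torch.cos(x)
    b = torch.sin(y)
    out = a + b
    # same grad call as before, but x and y already require grad on entry
    gx, gy = torch.autograd.grad(
        out,
        [x, y],
        grad_outputs=torch.ones_like(out),
        create_graph=True,
        retain_graph=True,
    )
    return out, (gx, gy)

new_fn = torch.compile(fn, backend="inductor")
x = torch.randn(10000, device="cuda:0", requires_grad=True)
y = torch.randn(10000, device="cuda:0", requires_grad=True)
out, grads = new_fn(x, y)

If that is the intended workaround, does it mean requires_grad_() is simply not supported inside a torch.compile-d function?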