The design of torch.autograd.functional.jacobian

Why is the first parameter of torch.autograd.functional.jacobian a function rather than the output itself? To use it on outputs that have already been computed, I have to define a dummy function that does nothing but return them:

import torch

x1 = torch.tensor([1.0, 1.0], requires_grad=True)
x2 = torch.tensor([2.0, 2.0], requires_grad=True)
y1 = (x1 + 2 * x2).sum()
y2 = (3 * x1 + 4 * x2).sum()

def dummy_func(*args):
    # The real computation cannot be written here, because it has already
    # been performed elsewhere (in other functions/sections):
    #   y1 = (x1 + 2 * x2).sum()
    #   y2 = (3 * x1 + 4 * x2).sum()
    return (y1, y2)

gg = torch.autograd.functional.jacobian(dummy_func, (x1, x2), vectorize=True)

g = torch.autograd.grad(y1, [x1, x2])
# This now fails with: "Trying to backward through the graph a second time,
# but the saved intermediate results have already been freed."

The problem with this hack is that torch.autograd.functional.jacobian does not retain the graph from (x1, x2) to (y1, y2), so any later grad/backward call through y1 or y2 fails.
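For reference, the best workaround I can think of is a manual one (just a sketch, assuming the outputs are scalars as above): build the Jacobian one row per output with torch.autograd.grad and retain_graph=True, so the graph stays usable afterwards.

rows = []
for y in (y1, y2):
    # One Jacobian row per scalar output; retain_graph=True keeps the
    # graph alive so later grad/backward calls still work.
    rows.append(torch.autograd.grad(y, (x1, x2), retain_graph=True))

# rows[i][j] is the gradient of the i-th output w.r.t. the j-th input:
#   rows[0] == (tensor([1., 1.]), tensor([2., 2.]))
#   rows[1] == (tensor([3., 3.]), tensor([4., 4.]))

# The graph is still intact, so this no longer raises:
g = torch.autograd.grad(y1, [x1, x2], retain_graph=True)

But this loops in Python over the outputs, which is exactly the kind of loop vectorize=True in the functional API is meant to avoid, so it still feels like the wrong tool.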