Why does `jacobian(..., create_graph=True)` raise "does not require grad" for a linear function?

import torch
x = torch.tensor([1., 2., 3., 4.], requires_grad=True)

def y_v(x):
    return x

ja = torch.autograd.functional.jacobian(y_v, x, create_graph=True)
s = ja.sum()
s.backward()  # raises the RuntimeError below

# Traceback (most recent call last):
# in <module>
#    s.backward()
# File "anaconda3/envs/pytorch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 147, in backward
#    Variable._execution_engine.run_backward(
# RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

The error occurs only for a linear function. For example, this function succeeds:

def y_v(x):
    return x.exp()

The Jacobian of f(x) = x is the identity matrix, which is constant with respect to x. Because it does not depend on x, autograd builds no graph for it even with create_graph=True, so there is nothing for the gradient to flow back through to x; the error is expected behavior.
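A minimal sketch illustrating the difference (assuming a PyTorch version that provides `torch.autograd.functional.jacobian`): for the identity, the returned Jacobian is a constant tensor with no `grad_fn`, while for `exp` it depends on `x` and backward succeeds.

```python
import torch

x = torch.tensor([1., 2., 3., 4.], requires_grad=True)

# Identity: the Jacobian is the constant identity matrix, so even with
# create_graph=True no graph is attached to the result.
ja_lin = torch.autograd.functional.jacobian(lambda t: t, x, create_graph=True)
print(ja_lin.requires_grad)  # False -> ja_lin.sum().backward() would raise

# exp: the Jacobian is diag(exp(x)), which depends on x, so a graph is built.
ja_exp = torch.autograd.functional.jacobian(lambda t: t.exp(), x, create_graph=True)
print(ja_exp.requires_grad)  # True

# sum of diag(exp(x)) is sum_i exp(x_i), so d/dx_j gives exp(x_j)
ja_exp.sum().backward()
print(x.grad)  # equals exp(x)
```

A quick way to check before calling `.backward()` is to inspect `requires_grad` on the returned Jacobian, as above.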