Hi! I’m trying to use `autograd.grad` inside a `torch.jit.script`-decorated function, but I’m running into type-hint mismatches that I’m unable to resolve. My problem is similar to Error when using TorchScript with autograd · Issue #46483 · pytorch/pytorch · GitHub, but I also need to use the `grad_outputs` argument, which is not covered there.
Running the following (with PyTorch 1.9.0)
```python
import torch
from torch import Tensor
from typing import List, Optional

@torch.jit.script
def gradient(y, x):
    # grad_outputs = [torch.ones_like(y)]
    grad_outputs = torch.jit.annotate(Optional[Tensor], torch.ones_like(y))
    # grad_outputs = torch.jit.annotate(Optional[List[Optional[Tensor]]], [torch.ones_like(y)])
    #   -> "Expected a List type hint but instead got Optional[List[Optional[Tensor]]]"
    grad = torch.autograd.grad(
        [y], [x], [grad_outputs], create_graph=True, retain_graph=True
    )[0]
    return grad
```
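For reference, the same function runs fine for me in eager mode (i.e., without the script decorator), so the problem seems specific to how TorchScript types the `grad_outputs` list. Here is the eager version I'm comparing against (`gradient_eager` is just my name for it):

```python
import torch

def gradient_eager(y, x):
    # In eager mode, a plain Python list works for grad_outputs.
    return torch.autograd.grad(
        [y], [x], [torch.ones_like(y)], create_graph=True, retain_graph=True
    )[0]

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 2
print(gradient_eager(y, x))  # expect dy/dx = 2 * x
```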
yields the error
```
RuntimeError:
aten::grad(Tensor[] outputs, Tensor[] inputs, Tensor?[]? grad_outputs=None, bool? retain_graph=None, bool create_graph=False, bool allow_unused=False) -> (Tensor?[]):
Expected a value of type 'Optional[List[Optional[Tensor]]]' for argument 'grad_outputs' but instead found type 'List[Tensor]'.
Empty lists default to List[Tensor]. Add a variable annotation to the assignment to create an empty list of another type (torch.jit.annotate(List[T, []]) where T is the type of elements in the list for Python 2)
```
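Following the error message's hint about variable annotations, my best guess at what the `aten::grad` schema wants is a direct `List[Optional[Tensor]]` annotation on the assignment instead of `torch.jit.annotate` — a sketch of that spelling (I'd appreciate confirmation that this is the intended form):

```python
import torch
from torch import Tensor
from typing import List, Optional

@torch.jit.script
def gradient(y: Tensor, x: Tensor) -> Tensor:
    # Guess: a Python-3 variable annotation gives the list the
    # List[Optional[Tensor]] type that the aten::grad schema expects.
    grad_outputs: List[Optional[Tensor]] = [torch.ones_like(y)]
    grad = torch.autograd.grad(
        [y], [x], grad_outputs, create_graph=True, retain_graph=True
    )[0]
    # In TorchScript, autograd.grad returns Optional tensors; the assert
    # refines Optional[Tensor] to Tensor for the declared return type.
    assert grad is not None
    return grad
```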
Any help would be greatly appreciated!
Cheers
Raphael