Jacobian of function that takes a dict as input

I have written some code that lets the user create vector-valued functions of the form f(x, y, theta), where theta is a dict of parameters. The idea is that x and y are always inputs to the function, while theta could hold any number of different parameters depending on the function of interest. Note that I am not using these functions as part of a neural network module or anything like that; I am purely using the autograd functionality to differentiate functions.

For simplicity, here is an example function with scalar inputs and a scalar output:

def f(x, y, theta):
    a, b = theta["a"], theta["b"]
    return x**4 + y**3 + a*b**2

Then I’d like to take the Jacobian as in

import torch
from torch.autograd.functional import jacobian

x = torch.tensor(5.)
y = torch.tensor(3.)
theta = {"a": 2, "b": 6}
jacobian(f, (x, y, theta))

This of course does not work, since jacobian expects a tuple of tensors as input; the error says that the function got a dict instead of a Tensor or tuple of Tensors. I tried to cast the dict to a tensor, but this is not possible. I also looked into TensorDict, but it does not appear to be accepted by jacobian either. Finally, I tried passing the parameters as kwargs that get unpacked in the function body, but this fails because jacobian only passes f() its positional arguments (just x and y).
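For reference, the closest workaround I can see is to flatten the dict into positional tensors and rebuild it inside a wrapper (a sketch; the name f_flat is just illustrative):

import torch
from torch.autograd.functional import jacobian

x = torch.tensor(5.)
y = torch.tensor(3.)
theta = {"a": torch.tensor(2.), "b": torch.tensor(6.)}
keys = sorted(theta)  # fix a deterministic parameter order

def f_flat(x, y, *values):
    # rebuild the dict and call f as defined above
    return f(x, y, dict(zip(keys, values)))

jacobian(f_flat, (x, y, *(theta[k] for k in keys)))
# (tensor(500.), tensor(27.), tensor(36.), tensor(24.))

This gives the right numbers, but it loses the dict structure in the output, which is exactly what I want to keep.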

In JAX it is possible to differentiate w.r.t. a dict input, and I was originally able to obtain the correct answer in that framework. Is there any way I could do this in PyTorch?
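For comparison, here is roughly what I had in JAX (a minimal sketch of what I mean by differentiating w.r.t. a dict):

import jax
import jax.numpy as jnp

def f(x, y, theta):
    a, b = theta["a"], theta["b"]
    return x**4 + y**3 + a * b**2

x, y = jnp.array(5.), jnp.array(3.)
theta = {"a": jnp.array(2.), "b": jnp.array(6.)}

# JAX treats the dict as a pytree, so the result is a dict of derivatives
jax.jacrev(f, argnums=2)(x, y, theta)
# {'a': Array(36., ...), 'b': Array(24., ...)}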

Hi @espresso,

Have a look at the torch.func package, which brings JAX-like composable function transforms to PyTorch! Its transforms accept pytrees (nested containers such as dicts) as inputs, so it can handle a dict of parameters directly; torch.func.functional_call is the related tool for when the parameters live inside an nn.Module.

Docs here: torch.func — PyTorch 2.0 documentation
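
For your example, something like this should work (a sketch, assuming the values in theta are tensors so they can be differentiated, and that jacrev handles the dict as a pytree the way its JAX counterpart does):

import torch
from torch.func import jacrev

def f(x, y, theta):
    a, b = theta["a"], theta["b"]
    return x**4 + y**3 + a * b**2

x = torch.tensor(5.)
y = torch.tensor(3.)
theta = {"a": torch.tensor(2.), "b": torch.tensor(6.)}

# differentiate w.r.t. all three arguments; the derivative w.r.t. theta
# comes back as a dict with the same keys as theta
jacrev(f, argnums=(0, 1, 2))(x, y, theta)
# (tensor(500.), tensor(27.), {'a': tensor(36.), 'b': tensor(24.)})

The third entry mirrors the structure of theta, which is the same behaviour you'd get from jax.jacrev.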