Hi,
I would like to speed up my code that computes the Jacobian matrix of a neural-network output with respect to the network parameters. So far, my code looks as follows:
```python
def autograd(input, params):
    # allow_unused=True: some parameters may not contribute to this output
    O = torch.autograd.grad(input, params, torch.ones_like(input),
                            allow_unused=True, retain_graph=True, create_graph=False)
    return O
```
```python
def _compute_centered_jacobian(model, samples):
    """Computes O = d Psi / d Theta."""
    # get the trainable NN parameters
    parameters = tuple(p for p in model.parameters() if p.requires_grad)
    # calculate log_probs and phases: I want to calculate the Jacobian of both!
    log_probs, phases = model.log_probabilities(samples)
    jac_ampl = [torch.cat([j_.flatten() for j_ in autograd(0.5 * log_probs[i], parameters)
                           if j_ is not None])
                for i in range(log_probs.size(0))]
    jac_phase = [torch.cat([j_.flatten() for j_ in autograd(phases[i], parameters)
                            if j_ is not None])
                 for i in range(phases.size(0))]
    jac_ampl = torch.stack(jac_ampl)
    jac_phase = torch.stack(jac_phase)
    return jac_ampl, jac_phase, log_probs, phases
```
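For completeness, here is a minimal self-contained toy version of the loop-based approach, in case anyone wants to reproduce the timing. `ToyModel` and its `log_probabilities` method are hypothetical stand-ins for my actual network:

```python
import torch

# Hypothetical stand-in for the real network: log_probabilities returns a
# (log_probs, phases) pair, one scalar of each per sample.
class ToyModel(torch.nn.Module):
    def __init__(self, n_in):
        super().__init__()
        self.amp = torch.nn.Linear(n_in, 1)
        self.phase = torch.nn.Linear(n_in, 1)

    def log_probabilities(self, samples):
        return self.amp(samples).squeeze(-1), self.phase(samples).squeeze(-1)

model = ToyModel(4)
samples = torch.randn(8, 4)
params = tuple(p for p in model.parameters() if p.requires_grad)
log_probs, phases = model.log_probabilities(samples)

# One backward pass per sample: correct, but the Python loop is what makes it slow.
jac_ampl = torch.stack([
    torch.cat([g.flatten() for g in torch.autograd.grad(
        0.5 * log_probs[i], params, allow_unused=True, retain_graph=True)
        if g is not None])
    for i in range(log_probs.size(0))
])
print(jac_ampl.shape)  # torch.Size([8, 5]) -- only the amp parameters contribute
```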
This works, but it is very slow. Furthermore, I cannot use `torch.autograd.functional.jacobian`, because I don't have the outputs as an explicit function of the network weights (something like `model.log_probabilities(network_weights)`).
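(As a side note: on recent PyTorch versions, `torch.func.functional_call` can manufacture exactly such an explicit function of the weights from a stateful module. The `Linear` below is just a hypothetical stand-in for the real model:)

```python
import torch
from torch.func import functional_call

model = torch.nn.Linear(4, 1)        # hypothetical stand-in model
samples = torch.randn(8, 4)
params = dict(model.named_parameters())

# An explicit function of the weights: f(p, x) runs the module with the
# tensors in `p` substituted for its registered parameters.
def f(p, x):
    return functional_call(model, p, (x,))

out = f(params, samples)
print(out.shape)  # torch.Size([8, 1])
```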
Now I found out that it is possible to speed up such calculations using `vmap`, and I tried something like this:
```python
def autograd_vmap(inputs, params):
    def autograd(input):
        O = torch.autograd.grad(input, params, torch.ones_like(input),
                                allow_unused=True, retain_graph=True, create_graph=False)
        return O
    out = vmap(autograd)(inputs)
    return out
```
However, this raises:

```
O = torch.autograd.grad(input, params, torch.ones_like(input), allow_unused=True, retain_graph=True, create_graph=False)
  File "...", line ..., in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
I would be happy about any help!
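Edit: in case it is useful to others, my current understanding is that the error occurs because `vmap` feeds the closure batched slices that no longer carry a `grad_fn`, so `torch.autograd.grad` cannot be composed with `vmap` directly. The composable route appears to be the `torch.func` transforms (`functional_call` + `jacrev` + `vmap`). A sketch on a hypothetical toy model (`ToyModel` and its `log_probabilities` are stand-ins for my real network):

```python
import torch
from torch.func import functional_call, jacrev, vmap

# Hypothetical stand-in for the real network.
class ToyModel(torch.nn.Module):
    def __init__(self, n_in):
        super().__init__()
        self.amp = torch.nn.Linear(n_in, 1)
        self.phase = torch.nn.Linear(n_in, 1)

    def log_probabilities(self, x):
        return self.amp(x).squeeze(-1), self.phase(x).squeeze(-1)

    def forward(self, x):
        # functional_call invokes forward, so route it to log_probabilities
        return self.log_probabilities(x)

model = ToyModel(4)
samples = torch.randn(8, 4)
params = {k: v.detach() for k, v in model.named_parameters()}

# Explicit function of the weights for a SINGLE sample (vmap adds the batch dim).
def f(p, x):
    log_prob, phase = functional_call(model, p, (x,))
    return 0.5 * log_prob, phase

# jacrev differentiates both outputs w.r.t. the parameter dict (argnums=0);
# vmap maps over the sample axis without a Python loop.
jac_ampl, jac_phase = vmap(jacrev(f, argnums=0), in_dims=(None, 0))(params, samples)

# Each result is a dict {param_name: tensor of shape (batch, *param.shape)};
# flatten and concatenate to get the dense (batch, n_params) Jacobian.
flat_ampl = torch.cat([g.reshape(samples.shape[0], -1) for g in jac_ampl.values()], dim=1)
print(flat_ampl.shape)  # torch.Size([8, 10])
```

If I understand correctly, parameters that don't influence an output come back as zero blocks here (rather than `None` as with `allow_unused=True`), so the concatenated Jacobian keeps a fixed column count.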