More efficient implementation of Jacobian matrix computation

Hello,

Is it possible in PyTorch to implement Jacobian matrix computation more efficiently than going sequentially through each element of a residual vector and calling backward() for each one? I.e.:

import numpy as np
import torch
from torch.autograd import Variable

residuals = model.forward()
n_residuals = residuals.data.shape[0]

# Number of elements in each parameter tensor.
params_offset = [int(np.prod(param.data.shape)) for param in model.parameters()]

jacobian = Variable(torch.zeros(n_residuals, int(np.sum(params_offset))))

for i in range(n_residuals):
    # Zero the accumulated gradients before each backward pass.
    model.zero_grad()

    residuals[i].backward(retain_graph=True)

    # Copy each parameter's gradient into the corresponding slice of row i.
    offset = 0
    for param, param_offset in zip(model.parameters(), params_offset):
        jacobian[i, offset:offset + param_offset] = param.grad.data.view(-1)
        offset += param_offset
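For reference, recent PyTorch releases (2.x) can compute the whole Jacobian in a vectorized way instead of one backward() call per residual, via torch.func.jacrev together with torch.func.functional_call. Below is a minimal sketch under that assumption; the Linear model and input x are placeholders standing in for your own network:

import torch
from torch.func import functional_call, jacrev

model = torch.nn.Linear(3, 5)   # placeholder for your model
x = torch.randn(3)              # placeholder input

params = dict(model.named_parameters())

def residual_fn(p):
    # Stateless forward pass with the supplied parameter dict.
    return functional_call(model, p, (x,))

# jacrev differentiates residual_fn w.r.t. its first argument and returns
# a dict mapping each parameter name to a tensor of shape
# (n_residuals, *param.shape).
per_param_jac = jacrev(residual_fn)(params)

n_residuals = residual_fn(params).numel()
# Flatten each block to (n_residuals, param.numel()) and concatenate into
# the full (n_residuals, total_params) Jacobian matrix.
jacobian = torch.cat(
    [j.reshape(n_residuals, -1) for j in per_param_jac.values()], dim=1
)

Here jacrev drives the row-by-row reverse passes through vmap internally, so the explicit Python loop over residual elements disappears.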

Hi!
Did you find a more efficient solution? I am facing the same issue here.