More efficient implementation of Jacobian matrix computation

(totoro) #1


Is it possible, using PyTorch, to compute the Jacobian matrix more efficiently than by iterating sequentially over each element of the residual vector and calling backward() for each one? I.e.:

import numpy as np
import torch
from torch.autograd import Variable

residuals = model.forward()
n_residuals = residuals.size(0)

# number of scalar entries in each parameter tensor
params_offset = list(map(lambda param: param.numel(), model.parameters()))

jacobian = Variable(torch.zeros(n_residuals, int(np.sum(params_offset))))
params = [param for param in model.parameters()]

for i in range(n_residuals):
    # zeroing grads each time, then one backward pass per residual
    model.zero_grad()
    residuals[i].backward(retain_graph=True)

    offset = 0
    for param, param_offset in zip(model.parameters(), params_offset):
        jacobian[i, offset:offset + param_offset] = param.grad.view(-1)
        offset += param_offset
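For reference, here is a self-contained sketch of the same per-residual backward loop on a toy model (the `Linear(3, 4)` model, seed, and input are made up for illustration). `retain_graph=True` keeps the graph alive across the repeated backward calls:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(3, 4)   # toy model: 4 residuals, 12 + 4 = 16 parameters
x = torch.randn(3)
residuals = model(x)            # shape (4,)

n_residuals = residuals.size(0)
n_params = sum(p.numel() for p in model.parameters())

jacobian = torch.zeros(n_residuals, n_params)
for i in range(n_residuals):
    model.zero_grad()                           # clear grads from the previous row
    residuals[i].backward(retain_graph=True)    # one backward pass per residual
    jacobian[i] = torch.cat([p.grad.view(-1) for p in model.parameters()])

# For y = Wx + b, dy_i/dW_ij = x_j and dy_i/db_j = delta_ij, so row i of the
# Jacobian holds x in the i-th block of W-columns and a 1 in the i-th b-column.
print(jacobian.shape)
```

Note this still runs one full backward pass per residual, which is exactly the cost the question is about: the work scales with the number of residuals, not with the number of parameters.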


Did you find a more efficient solution? I am facing the same issue here.