Getting an all-zero answer while calculating the Jacobian in PyTorch using the built-in jacobian function

Sorry to post again. I don't like using an auxiliary list to make autograd work; will torch support this kind of in-place operation in the future?

I think it's best if a dev answers this question (@ptrblck, apologies for the tag!)

I am curious whether autograd really requires "pushing back" each item into a list, with no way to assign it via indexing. In my work, I need an if statement to determine which index to write to. Is there really no solution?

The problem in the original code is not really the in-place assignment but the fact that you define f as a "leaf Tensor that requires grad" even though you don't need to.
You can simply change f so that it does not require gradients and remove the torch.no_grad() block.

That will give you the result you want :)

For reference, the updated code:

import torch
from torch.autograd.functional import jacobian

def get_f(x):
    # f is just a buffer for the outputs; it must NOT require grad itself.
    f = torch.arange(0, 3, requires_grad=False, dtype=torch.float64)
    for i in looparray:
        # In-place indexed assignment is fine here because f is not a
        # leaf tensor that requires grad.
        f[i] = x[i] ** 2
    return f

looparray = torch.arange(0, 3)
x = torch.arange(0, 3, requires_grad=True, dtype=torch.float64)
J = jacobian(get_f, x).detach().numpy()
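
As a sanity check on the output (a minimal sketch reusing J and numpy; the expected matrix follows directly from f_i = x_i**2, whose Jacobian is diagonal with entries 2 * x_i):

import numpy as np

# At x = [0., 1., 2.], the Jacobian of f_i = x_i**2 is diag(2 * x_i):
# [[0., 0., 0.],
#  [0., 2., 0.],
#  [0., 0., 4.]]
expected = np.diag(2.0 * np.arange(3, dtype=np.float64))
assert np.allclose(J, expected)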

Thank you. A quick follow-up question: if f is defined to require grad, is that for df/d(something) or d(something)/df? I remember it is the latter.

Because I will need df/d(something) for other cases.

f requiring grad would be for the case where you want d Loss / df, which doesn't make sense in your example.


And d loss / df can be done by hand, right? Since normally loss = (f - f0)^2.

Yes, d loss / df is just 2 * (f - f0), which you can derive by hand via the chain rule (assuming that f and f0 are independent of each other, of course).
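
For completeness, a minimal sketch of getting d loss / df directly from autograd rather than by hand (the tensors f and f0 below are made-up illustrations, not the thread's variables): make f require grad, build the squared-error loss, and compare autograd's gradient with the analytical 2 * (f - f0).

import torch

f = torch.arange(0, 3, dtype=torch.float64, requires_grad=True)
f0 = torch.ones(3, dtype=torch.float64)

loss = ((f - f0) ** 2).sum()
# autograd.grad returns a tuple of gradients, one per input tensor
grad_f, = torch.autograd.grad(loss, f)

# Matches the hand-derived formula d loss / df = 2 * (f - f0)
assert torch.allclose(grad_f, 2 * (f - f0))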