Outputs = func(*inputs) TypeError: ‘Tensor’ object is not callable

Hello
I am trying to compute the Jacobian of a tensor (n by m) with respect to a tensor (m by d), so I tried this code:

torch.autograd.functional.jacobian(output, W)

where output is the output of my network. This gives me the following error:
outputs = func(*inputs)
TypeError: ‘Tensor’ object is not callable

Would you give me some advice on how I can handle this?

torch.autograd.functional.jacobian takes a function as its first argument. Have a look at the provided examples to see some dummy use cases.
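For instance, something along these lines (a minimal sketch; model and x are made-up stand-ins for your network and its input):

import torch
from torch.autograd.functional import jacobian

model = torch.nn.Linear(4, 3)  # stand-in for your network
x = torch.randn(5, 4)          # stand-in input

# jacobian(model(x), x) fails: the first argument is already a Tensor,
# hence "TypeError: 'Tensor' object is not callable".
# Pass the callable itself instead:
J = jacobian(model, x)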


I tried them, but I think vjp is what I am looking for, not jacobian! I am not sure.

Hi,

Are you looking for the full Jacobian or the product between a vector you have and the Jacobian?
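To illustrate the difference (a small sketch with a toy function f standing in for your network):

import torch
from torch.autograd.functional import jacobian, vjp

def f(x):
    return x ** 2  # toy elementwise function

x = torch.randn(3)
v = torch.randn(3)  # a vector you already have

J = jacobian(f, x)     # the full Jacobian, shape (3, 3)
out, p = vjp(f, x, v)  # only the product v^T J, shape (3,)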


Hello, I have a neural network that has a fully connected (FC) layer with a ReLU activation before it (i.e. the input of this FC layer comes from a ReLU activation). If we call this input x and this layer h, I would like to compute the Jacobian of h with respect to its input x, so we expect a 2D matrix of partial derivatives dh_j/dx_i. When I used jacobian, it gave me a 4D matrix (x and h are 2D matrices). On the other hand, vjp gave me a 2D matrix… I'm confused about which one is correct!

The Jacobian will be 2D only if your input and output are 1D.
If you have an nD input and an mD output, then the Jacobian will be (m+n)D.

If you say that x and h are 2D, what do you mean by dh_j/dx_i? You need 2 indices to index both h and x here.
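To see the shape rule concretely (a small sketch using an elementwise toy function):

import torch
from torch.autograd.functional import jacobian

x1 = torch.randn(5)                   # 1D input, 1D output
print(jacobian(torch.sin, x1).shape)  # torch.Size([5, 5])       -> 2D

x2 = torch.randn(4, 5)                # 2D input, 2D output
print(jacobian(torch.sin, x2).shape)  # torch.Size([4, 5, 4, 5]) -> 4D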


That was an example of the partial derivative of a 1D vector h with respect to a 1D vector x; I assumed h and x are both 1D, but in my problem they are 2D. So shall I use jacobian (https://pytorch.org/docs/stable/autograd.html?highlight=jacobian#torch.autograd.functional.jacobian)? There is a problem with it: how can I pass x and h to this function? Remember that I want to compute the Jacobian of h, which is an FC layer, with respect to its input, which is the output of a ReLU activation function.
Would you please give me a piece of advice?
Thanks in advance

In this case you will need to give the linear layer as the function and x as the input.

It will create the Jacobian, which will be 4D because the input and output are both 2D.
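In shapes (a sketch with made-up dimensions: x is n by d and the layer maps d -> m):

import torch
from torch.autograd.functional import jacobian

fc = torch.nn.Linear(4, 3)  # d = 4, m = 3
x = torch.randn(5, 4)       # n = 5

J = jacobian(fc, x)
print(J.shape)  # torch.Size([5, 3, 5, 4]), i.e. J[i, j, k, l] = d h[i, j] / d x[k, l]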


Do you mean I need to create a function like this (W: weight matrix between input x and layer h; b: bias term)?

import torch
from torch.autograd.functional import jacobian

def func(x):
    # the affine map of the FC layer: x @ W + b
    fc_out = torch.mm(x, W) + b
    return fc_out

jacobian(func, x)

Shall I set create_graph=True?

You can do that.
But if you already have a module, you can do:

mod = nn.Linear(10, 10)

jacobian(mod, x)  # to get the Jacobian of the module's output wrt x

You should set create_graph=True if you want to backprop through that operation.
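For example, if you want a Jacobian-based term in your loss (a hedged sketch; the penalty here is just an illustration, not something from this thread):

import torch
from torch.autograd.functional import jacobian

fc = torch.nn.Linear(4, 3)
x = torch.randn(5, 4)

J = jacobian(fc, x, create_graph=True)  # keep the graph so J is differentiable
penalty = J.pow(2).sum()                # e.g. a Jacobian-norm regularizer
penalty.backward()                      # gradients reach fc.weight and fc.bias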


Thank you 🙂