# How can I call backward functions?!

Hi,

I want to call some backward functions like:
MulBackward
FftWithSizeBackward
etc…

Where are they and how can I call them with my input?

Thanx

Would you help me? @ptrblck @alband

It depends on which loss function you used.

It does not depend on the loss function; I just want to call the function AddBackward, for example. This call is standalone and has nothing to do with a neural network! @fmassa @jekbradbury

I mean that whichever loss function you use, for example nn.CrossEntropyLoss, its backward can be NllLossBackward …

What you want is just to call the backward function manually; check this documentation: https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html.
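As a minimal illustration of what that tutorial covers: gradients can be computed for any differentiable op on its own, with no network or loss function involved (a sketch using `torch.autograd.grad`):

```python
import torch

# Forward: a plain elementwise add, no network or loss involved.
x = torch.tensor([1.0, 2.0], requires_grad=True)
y = torch.tensor([3.0, 4.0], requires_grad=True)
z = x + y

# Autograd runs the add's backward for us; go (the incoming
# output gradient) is explicitly set to ones here.
gx, gy = torch.autograd.grad(z, (x, y), grad_outputs=torch.ones_like(z))
print(gx, gy)  # for an add, both equal the incoming gradient
```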

As I said, please forget about the loss! For each operation we have a forward function, which we can call as easily as, for example, torch.add(x, y). And we have a backward function which I don't know how to call manually, for example something like torch.AddBackward(ctx, x, y)! I know that autograd calls them one by one somehow, but I don't know how to call them myself!

Hi,

Can you please avoid pinging everyone? It creates a lot of noise for us and we do read all the posts.

It depends on which functions you want. There is no common API to access the backward functions without doing the forward.
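Concretely, once the forward has run, the backward node it recorded hangs off the output as `grad_fn`, and that node can be invoked by hand with an incoming gradient. A sketch (note the order of the returned gradients follows the node's `next_functions` and is not guaranteed here):

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = torch.tensor([3.0], requires_grad=True)
z = x * y  # the forward records a MulBackward0 node as z.grad_fn

# Invoke the recorded backward node directly with an incoming gradient (go).
grads = z.grad_fn(torch.ones_like(z))
print(z.grad_fn)  # a MulBackward0 node
print(grads)      # the two input gradients, y*go and x*go, in some order
```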

I want to call these for example:
MulBackward
FftWithSizeBackward

I don’t mind calling them in C++ or Python. I just want to see the definition and call them later on.

For the forward function doing `o = x + y`, the backward is `gx = go` and `gy = go`.
For the forward function doing `o = x * y`, the backward is `gx = y * go` and `gy = x * go`.
For the fft, it depends on which forward function you use. If you use a regular `o = fft(x)`, I think the gradient is just `gx = ifft(go)`. If you want to be sure, you can find here how the top-level fft functions are linked to the low-level one called `_fft_with_size`. Here you can find how the backward of that function is defined for each of its arguments. And the `fft_backward()` can be found here. Unfortunately this function is not directly accessible, as its arguments are quite far from the regular Python API. But you can easily find the `torch.[]fft` call that corresponds to your forward.
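The add and mul formulas above are easy to check with `torch.autograd.grad` by passing an explicit `go` as `grad_outputs`:

```python
import torch

x = torch.tensor([2.0, 5.0], requires_grad=True)
y = torch.tensor([3.0, 7.0], requires_grad=True)
go = torch.tensor([10.0, 100.0])  # an arbitrary incoming gradient

o = x * y
gx, gy = torch.autograd.grad(o, (x, y), grad_outputs=go)
print(gx)  # y * go -> tensor([ 30., 700.])
print(gy)  # x * go -> tensor([ 20., 500.])
```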


Thanks! I found this in the C++ sources:

```cpp
Tensor var_std_mean_backward(const variable_list& grads, const Tensor& self, const Tensor& r1, const Tensor& r2, bool unbiased, bool is_std) {
  // ...
}
```
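That C++ function is the analytic backward for `var`/`std`/`var_mean`/`std_mean`. For the unbiased `var` case, the same gradient, `d var / dx = 2 * (x - mean) / (N - 1)`, can be written by hand in Python and checked against autograd (a sketch):

```python
import torch

x = torch.tensor([1.0, 2.0, 4.0, 7.0], requires_grad=True)
v = torch.var(x)  # unbiased by default: sum((x - mean)^2) / (N - 1)

(g_autograd,) = torch.autograd.grad(v, x)

# Hand-written analytic gradient: d var / dx = 2 * (x - mean) / (N - 1)
n = x.numel()
g_manual = 2 * (x.detach() - x.detach().mean()) / (n - 1)
print(torch.allclose(g_autograd, g_manual))  # True
```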

And where do these `grad_fn`s come from?

```python
a = torch.rand(2, 3).requires_grad_()
b = torch.rand(2, 3)
z = a * b  # (z was undefined in the snippet; presumably the product of a and b)
f = z.sum()
f.backward()
```

`grads[0]` and `grads[1]` are the gradients wrt the 0th and 1st output of the forward function: what I called `go` in my add and mul examples above.
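On where the `grad_fn`s come from: every differentiable forward op attaches one backward node to its output, and each node links back to its inputs' nodes via `next_functions`, forming the graph that `backward()` walks. A small sketch:

```python
import torch

a = torch.rand(2, 3, requires_grad=True)
b = torch.rand(2, 3)
z = a * b    # the mul records a MulBackward0 node on z
f = z.sum()  # the sum records a SumBackward0 node on f

print(f.grad_fn)                 # the SumBackward0 node
print(f.grad_fn.next_functions)  # links back to the MulBackward0 node
f.backward()                     # walks these nodes from f to the leaves
print(a.grad.shape)              # gradient accumulated on the leaf
```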