[solved] Custom function: what is the logic for requires_grad property on output

Say I write a custom function derived from torch.autograd.Function.
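
For context, a minimal sketch of the kind of function I mean (MyMul is a made-up placeholder, not my actual function):

```python
import torch
from torch.autograd import Function, Variable


class MyMul(Function):
    # Toy custom function: elementwise multiply with a hand-written backward.

    @staticmethod
    def forward(ctx, a, b):
        # Save the inputs so backward can use them.
        ctx.save_for_backward(a, b)
        return a * b

    @staticmethod
    def backward(ctx, grad_output):
        a, b = ctx.saved_tensors
        # d(a*b)/da = b and d(a*b)/db = a, one gradient per input.
        return grad_output * b, grad_output * a
```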

Where is the code that determines the requires_grad property of the Variable that wraps the tensor returned from that function?

Is it _wrap_outputs in python_function.cpp?

What is that logic in words?

My problem is that the returned Variable has requires_grad=False, but I want gradients to propagate through the function.

The Variables that I pass as inputs both have requires_grad=True, and I don’t understand why the output has requires_grad=False.

Thanks a lot!

Solved: I wasn’t passing the Variables but Variable.data, so the inputs didn’t actually have requires_grad=True. The output of a Function only requires grad if at least one of its inputs does, which is why it came back with requires_grad=False.
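
In case anyone hits the same thing, a quick repro of the mistake and the fix, using the toy MyMul from above:

```python
a = Variable(torch.randn(3), requires_grad=True)
b = Variable(torch.randn(3), requires_grad=True)

# Wrong (what I was doing): .data strips the autograd wrapper, so the
# function sees inputs that don't require grad, and neither does the output.
out_bad = MyMul.apply(a.data, b.data)
print(out_bad.requires_grad)  # False

# Right: pass the Variables themselves. Since at least one input requires
# grad, the output does too, and backward() reaches a and b.
out_good = MyMul.apply(a, b)
print(out_good.requires_grad)  # True
out_good.sum().backward()
print(a.grad)  # now populated
```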