Why is requires_grad True for the output of autograd.Function?

Hi, I have a question,

import torch

class MyFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # return input.detach()
        return torch.tensor([1.0], requires_grad=False)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

input = torch.tensor([2.], requires_grad=True)
output = MyFunc.apply(input)

print(output.requires_grad)  # True

I don't understand why output.requires_grad is True when the tensor returned by forward has requires_grad=False.

I am new to PyTorch, so I hope this isn't a bother.

Thanks

This is pretty tricky indeed, but this behavior is noted in the docs:

By default, all the output Tensors that are of differentiable type will be set to require gradient and have all autograd metadata set for them. If you don’t want them to require gradients, you can use the mark_non_differentiable method mentioned above. For output Tensors that are not of differentiable type (integer types for example), they won’t be marked as requiring gradients.

See: Extending PyTorch — PyTorch 2.1 documentation
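
To make this concrete, here is a minimal sketch (the class name MyNonDiffFunc is just for illustration) showing how ctx.mark_non_differentiable keeps the returned tensor from requiring gradients:

import torch

class MyNonDiffFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        out = torch.tensor([1.0])
        # Tell autograd this output is not differentiable, so it will not
        # be given grad_fn / requires_grad=True when apply returns.
        ctx.mark_non_differentiable(out)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient flows back through a non-differentiable output.
        return None

x = torch.tensor([2.0], requires_grad=True)
y = MyNonDiffFunc.apply(x)
print(y.requires_grad)  # False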

Thank you very much for your answer. May I ask where I can view the source code? I would like to understand some of the details.

Check here for the entry point of Python custom functions once you call apply:

This function is where the grad_fn is set on the output. You can see that set_gradient_edge is called as long as the output is of a differentiable type (i.e., its scalar type is floating point or complex).
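
As a quick check of that rule, here is a small sketch (ReturnTwo is a hypothetical name) that returns both a floating point tensor and an integer tensor from the same custom Function; only the floating point output ends up requiring gradients:

import torch

class ReturnTwo(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        float_out = torch.tensor([1.0])                 # differentiable dtype
        int_out = torch.tensor([1], dtype=torch.long)   # non-differentiable dtype
        return float_out, int_out

    @staticmethod
    def backward(ctx, grad_float, grad_int):
        return None

x = torch.tensor([2.0], requires_grad=True)
f, i = ReturnTwo.apply(x)
print(f.requires_grad)  # True  -> floating point output gets autograd metadata
print(i.requires_grad)  # False -> integer output is skipped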