Autograd of if/else branching returns None instead of 0.0

I am new to PyTorch and wanted to understand a bit how torch.autograd.grad works. For this I wrote a small piece of code, but I don't understand why it is not working.

import torch
from torch.autograd import grad

x = torch.tensor(-5.0, requires_grad=True)

def my_relu(z):
    if z > 0.0:
        return z
    else:
        z = torch.tensor(0.0, requires_grad=True)
        return z

y = my_relu(x)
h = grad(y, x, retain_graph=True)

I get

One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

but if I set allow_unused=True, then I get h=None instead of h=0.0 as the output. What is going on here?

You are never using the z input to my_relu in the else branch since you are creating a new leaf variable:

else:
    z = torch.tensor(0.0, requires_grad=True)

Here, z is a new tensor that has no knowledge of the input z and is not attached to its computation graph in any way.
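
If what you were after is h = 0.0, one way to get it (just a sketch, not the only option) is to derive the zero from z itself, so the result stays attached to x's graph; torch.relu(z) or torch.clamp(z, min=0.0) would work just as well:

import torch
from torch.autograd import grad

x = torch.tensor(-5.0, requires_grad=True)

def my_relu(z):
    if z > 0.0:
        return z
    else:
        # 0.0 * z is still a function of z, so the output keeps x in its graph
        return 0.0 * z

y = my_relu(x)
h = grad(y, x)
print(h)  # (tensor(0.),)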

Oh I see. And is it possible to compute the gradient with respect to this new leaf?

Yes, but it would be 1., since you would effectively be computing h = grad(y, y, retain_graph=True): no operation was applied to z (or to y, which is just z under another name).
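
For example, reusing my_relu from the question (with requires_grad spelled correctly): at x = -5.0 the returned y is the brand-new leaf itself, so you can differentiate it with respect to itself. A minimal check, just to illustrate the point:

import torch
from torch.autograd import grad

x = torch.tensor(-5.0, requires_grad=True)

def my_relu(z):
    if z > 0.0:
        return z
    else:
        return torch.tensor(0.0, requires_grad=True)

y = my_relu(x)  # with x = -5.0, y is the new leaf created in the else branch
print(grad(y, y, retain_graph=True))   # (tensor(1.),) -- dy/dy = 1, no op was applied
print(grad(y, x, allow_unused=True))   # (None,) -- x never entered y's graph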