Customized ReLU (at a point other than zero) error

I am getting an error from this code, but if I use 0 instead of 0.5, it works. How can it be solved?

  File "/home/pycy/anaconda3/envs/inpainting_CTSDG/lib/python3.6/site-packages/torch/autograd/", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag

class MyReLU(torch.autograd.Function):

    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)  # backward reads ctx.saved_tensors
        out = torch.zeros_like(inp).cuda()
        out[inp > 0.5] = 1.0
        return out

    @staticmethod
    def backward(ctx, grad_output):
        inp, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[inp == 0.5] = 0
        return grad_input

Could you post a minimal, executable code snippet which would reproduce this issue, please?

I wish I could, but I don’t know how to write code that replicates it. I found out that my gradients blow up, and that’s probably the issue, because when I use 0.9 it works. Anyway, can you check this and see if it does what I intended?
I want the activation layer to pass 1 for values over 0.5 and 0 for values under 0.5, but in the backward pass I want it to behave like a normal ReLU. I know it doesn’t quite make sense, but I need the output values to be either 1 or 0 while still backpropagating as a ReLU.

The “normal” ReLU backward would be defined as:

    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

so you might want to change it.
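For reference, here is a minimal, self-contained sketch of a plain ReLU written as a custom autograd `Function` (the class name `PlainReLU` is just illustrative; this runs on the CPU):

```python
import torch

class PlainReLU(torch.autograd.Function):
    """Illustrative plain ReLU as a custom autograd Function."""

    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)  # backward needs the input to build its mask
        return inp.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        inp, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[inp < 0] = 0     # gradient is 0 wherever the input was negative
        return grad_input

x = torch.tensor([-2.0, 3.0], requires_grad=True)
y = PlainReLU.apply(x)
y.sum().backward()
print(y.detach())  # tensor([0., 3.])
print(x.grad)      # tensor([0., 1.])
```

Note that `forward` must call `ctx.save_for_backward(inp)`, otherwise `ctx.saved_tensors` in `backward` will be empty.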

Just a quick comment: could it be an issue that you save inp for backward and then return the output on a different device? You have inp on (I assume) the CPU and out on CUDA. When computing your loss, grad_output will be on the device of your loss, which I assume is CUDA, and you mask its values with a tensor that is on the CPU. Surely this would cause some issue?

Yes, but I don’t want the ReLU exactly; I want the activation function to have two behaviours. In the forward pass the output would be 1 and 0, but in the backward pass it sees the ReLU backward. So I have a point of discontinuity, and I want to assign it a gradient value in case it causes a problem in the backward pass. Could you please explain how I can modify ReLU into a chimera of Step (forward) and ReLU (backward)?
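A minimal sketch of such a Step-forward/ReLU-backward function might look like this (the class name `StepForwardReLUBackward` and the 0.5 threshold are just illustrative; note that `torch.zeros_like` already keeps the input's device and dtype, so no explicit `.cuda()` call is needed):

```python
import torch

class StepForwardReLUBackward(torch.autograd.Function):
    """Sketch: hard step (1 for inputs > 0.5, else 0) in forward,
    but the gradient of a standard ReLU in backward."""

    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)   # backward needs the input to build its mask
        out = torch.zeros_like(inp)  # stays on inp's device automatically
        out[inp > 0.5] = 1.0
        return out

    @staticmethod
    def backward(ctx, grad_output):
        inp, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[inp < 0] = 0      # ReLU-style gradient: zero where input < 0
        return grad_input

x = torch.tensor([-1.0, 0.25, 0.75], requires_grad=True)
y = StepForwardReLUBackward.apply(x)
y.sum().backward()
print(y.detach())  # tensor([0., 0., 1.])
print(x.grad)      # tensor([0., 1., 1.])
```

This is the straight-through-estimator pattern: the forward output is binary, while the backward pass pretends the function was a ReLU.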

Thanks, but no, it works fine with other values.