I am getting an error from this code, but if I use 0 instead of 0.5, it works. I'm wondering how it can be solved.
File "/home/pycy/anaconda3/envs/inpainting_CTSDG/lib/python3.6/site-packages/torch/autograd/__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
I wish I could, but I don't know how to write code that replicates it. I found out that my gradients go through the roof, and that's probably the issue, because when I put 0.9 it works. Anyway, can you check this and see if it does what I intended?
I want the activation layer to pass 1 for values over 0.5 and 0 for values under 0.5, but in the backward pass I want it to look like a normal ReLU. I know it doesn't make sense, but I need the output values to be either 1 or 0, and I still want backpropagation as if it were a ReLU.
Just a quick comment: could it be an issue that you save inp for backward and then return the output on a different device? So you have inp on (I assume) your CPU and out on CUDA. Then, when computing your loss, grad_output will be on the device of your loss, which I assume is CUDA, and you mask its value by a tensor that is on the CPU. Surely this would cause some issue?
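To avoid that mismatch, the mask used in backward can be moved onto grad_output's device and dtype before the elementwise multiply. A minimal sketch of that pattern (the helper name mask_like is my own, not from your code):

```python
import torch

def mask_like(grad_output, inp):
    # Keep the ReLU-style mask on grad_output's device and dtype so
    # the elementwise multiply never mixes CPU and CUDA tensors.
    return grad_output * (inp > 0).to(grad_output.device, grad_output.dtype)

g = torch.ones(4)
x = torch.tensor([-1.0, 0.2, 0.0, 3.0])
print(mask_like(g, x))  # tensor([0., 1., 0., 1.])
```

The cleaner fix, though, is to not change devices inside forward at all, so the saved tensor and grad_output naturally end up on the same device.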
Yes, but I don't want the ReLU exactly; I want the activation function to have two behaviours. In the forward pass the output would be 1s and 0s, but in the backward pass it sees the ReLU backward. So I have a point of discontinuity that I want to assign a gradient value to, in case I had a problem in the backward pass. Could you please explain how I can modify ReLU into a chimera of Step (forward) and ReLU (backward)?
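This kind of forward/backward mismatch is exactly what a custom torch.autograd.Function allows: threshold at 0.5 in forward, but return ReLU's gradient in backward. A minimal sketch (the class name here is my own):

```python
import torch

class StepForwardReLUBackward(torch.autograd.Function):
    """Forward: hard threshold at 0.5, outputting exactly 0 or 1.
    Backward: the gradient of a plain ReLU, i.e. pass grad_output
    through wherever the input was positive."""

    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)
        return (inp > 0.5).to(inp.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        inp, = ctx.saved_tensors
        # ReLU's gradient: 1 where the input was positive, 0 elsewhere.
        return grad_output * (inp > 0).to(grad_output.dtype)

step_relu = StepForwardReLUBackward.apply

x = torch.tensor([-1.0, 0.3, 0.7, 2.0], requires_grad=True)
y = step_relu(x)
print(y.detach())   # tensor([0., 0., 1., 1.])
y.sum().backward()
print(x.grad)       # tensor([0., 1., 1., 1.])
```

Note that x = 0.3 gets a nonzero gradient even though the forward output there is 0; that is the intended "chimera" behaviour, since backward pretends the layer was a ReLU.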