Defining Custom leaky_relu functions

Hi, thanks for your response. The Medium post was helpful. I designed a trainable parameter.
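
For context, here is a minimal sketch of the kind of thing I mean: a leaky ReLU written as a torch.autograd.Function whose negative slope is an nn.Parameter. The names LeakyReLUFn and LearnableLeakyReLU are just placeholders, not the exact code from the Medium post:

```python
import torch
from torch import nn

class LeakyReLUFn(torch.autograd.Function):
    """Leaky ReLU with a learnable negative slope (illustrative sketch)."""

    @staticmethod
    def forward(ctx, input, slope):
        ctx.save_for_backward(input, slope)
        return torch.where(input >= 0, input, input * slope)

    @staticmethod
    def backward(ctx, grad_output):
        input, slope = ctx.saved_tensors
        # d(out)/d(in) is 1 on the positive side and `slope` on the negative side.
        grad_input = grad_output * torch.where(input >= 0, torch.ones_like(input), slope)
        # d(out)/d(slope) is `input` on the negative side and 0 elsewhere; reduce to a scalar.
        grad_slope = (grad_output * torch.where(input >= 0, torch.zeros_like(input), input)).sum()
        return grad_input, grad_slope

class LearnableLeakyReLU(nn.Module):
    def __init__(self, init_slope=0.01):
        super().__init__()
        self.slope = nn.Parameter(torch.tensor(init_slope))

    def forward(self, x):
        return LeakyReLUFn.apply(x, self.slope)
```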

I have a question about the leaky ReLU above. Have you tried running it on CUDA? On the CPU both the forward and backward passes work, but when you move the tensors to CUDA the code never reaches the backward function; it only runs the forward pass. I have also created a new question in the hope of getting an answer: Custom Backward function using Function from torch.autograd fails on cuda but works on cpu
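
This is roughly how I am checking whether the backward is entered, in case anyone wants to reproduce it (a trimmed-down illustration, not the exact code from the other thread):

```python
import torch

class MyLeakyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return torch.where(input >= 0, input, 0.01 * input)

    @staticmethod
    def backward(ctx, grad_output):
        # This print is only there to see whether the custom backward is ever entered.
        print("backward called on", grad_output.device)
        (input,) = ctx.saved_tensors
        return grad_output * torch.where(
            input >= 0, torch.ones_like(input), torch.full_like(input, 0.01)
        )

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4, device=device, requires_grad=True)
MyLeakyReLU.apply(x).sum().backward()
print(x.grad)
```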

Please let me know if it works for you; it definitely doesn't work for me. I even tried marking the output as dirty with ctx.mark_dirty(output) and got the following error: RuntimeError: a leaf Variable that requires grad has been used in an in-place operation. Then I also tried output.view_as(output), but got: RuntimeError: Some elements marked as dirty during the forward method were not returned as output. The inputs that are modified inplace must all be outputs of the Function.
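
For reference, my understanding from the autograd docs is that mark_dirty is only meant for a forward that really mutates one of its inputs in place and then returns that same tensor as its output, roughly like this (the in-place clamp_ here is just a stand-in, not my actual function):

```python
import torch

class InplaceReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input.clone())  # keep a copy of the values before mutation
        input.clamp_(min=0)                   # mutate the input tensor in place
        ctx.mark_dirty(input)                 # declare the in-place modification...
        return input                          # ...and return that same tensor as the output

    @staticmethod
    def backward(ctx, grad_output):
        (orig,) = ctx.saved_tensors
        return grad_output * (orig > 0).to(grad_output.dtype)
```

If I read it correctly, that is also what the second error is saying: whatever is passed to mark_dirty has to be among the tensors returned from forward, and output.view_as(output) creates a new tensor, so the dirty one is no longer in the outputs.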