Backward implementation of a new activation function

I am experimenting with implementing a custom activation function. To start with, I tried to mimic the behavior of relu, so I added a custom op my_relu to ATen/native/native_functions.yaml that dispatches to the existing relu kernels:

- func: my_relu(Tensor self) -> Tensor
  use_c10_dispatcher: full
  variants: function, method
  dispatch:
    CPU: relu
    CUDA: relu
    MkldnnCPU: mkldnn_relu
    QuantizedCPU: quantized_relu

I rebuilt PyTorch from source after this change and could run the forward pass successfully:

>>> import torch
>>> x = torch.randn(5, device='cuda')
>>> x.requires_grad_()
tensor([ 1.8718,  0.7903, -0.6581,  1.3783, -1.1062], device='cuda:0',
       requires_grad=True)
>>> x
tensor([ 1.8718,  0.7903, -0.6581,  1.3783, -1.1062], device='cuda:0',
       requires_grad=True)
>>> y = x.relu()
>>> y
tensor([1.8718, 0.7903, 0.0000, 1.3783, 0.0000], device='cuda:0',
       grad_fn=<ReluBackward0>)
>>> z = x.my_relu()
>>> z
tensor([1.8718, 0.7903, 0.0000, 1.3783, 0.0000], device='cuda:0',
       grad_fn=<NotImplemented>)

As you can see, z (my_relu(x)) has grad_fn=<NotImplemented>. Could someone help me with where the backward function should be added? I tried to follow relu's implementation but couldn't find the answer.

Thanks.

Hi,

You will need to specify the derivative in tools/autograd/derivatives.yaml and then recompile.
You can check the entry for relu there, and the comment at the beginning of that file, for more details.
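
For reference, relu's backward in derivatives.yaml is expressed with threshold_backward, so a my_relu entry could look roughly like the sketch below. This is only a sketch assuming my_relu should reuse relu's gradient; the signature in the entry must match the one you declared in native_functions.yaml.

- name: my_relu(Tensor self) -> Tensor
  # grad is the incoming gradient w.r.t. the output; result is the forward output.
  # Reusing relu's formula here is an assumption based on relu's existing entry.
  self: threshold_backward(grad, result, 0)

After rebuilding, z = x.my_relu() should get a proper grad_fn instead of NotImplemented, and you can sanity-check it by calling z.sum().backward() and inspecting x.grad.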
