(1) If I write “output = torch.square(input)” in the custom activation’s forward() in the code below, I think it doesn’t require a backward() implementation, since it uses a torch function.
(2) But if I write “output = input * input”, it requires an implementation of backward(). Is the code below fine for the custom function “output = input * input”?
Let me know if I am missing something.
class my_act(Function):
    # Note that both forward and backward are @staticmethods
    @staticmethod
    def forward(ctx, input):
        # unused weight/bias args dropped: an activation takes only its input,
        # and backward must return one gradient per forward input
        ctx.save_for_backward(input)
        output = input * input
        return output

    @staticmethod
    def backward(ctx, grad_output):
        # ctx.saved_tensors is a tuple, so unpack it
        input, = ctx.saved_tensors
        # derivative of x*x is 2*x, multiplied with grad_output
        grad_input = grad_output * (2 * input)
        return grad_input
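If helpful, the hand-written backward can be checked against finite differences with torch.autograd.gradcheck. The sketch below restates the class in its simplified form (input only, no weight/bias); gradcheck needs double-precision inputs:

```python
import torch
from torch.autograd import Function, gradcheck

class my_act(Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input * input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors        # saved_tensors is a tuple
        return grad_output * (2 * input)  # d(x*x)/dx = 2x

x = torch.randn(5, dtype=torch.double, requires_grad=True)
print(gradcheck(my_act.apply, (x,)))  # prints True when backward is correct
```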
I just want to implement a Kernel Density Estimator (KDE), as there is no torch function for that, and if I use scikit-learn’s KDE I need to detach the tensor, which blocks the gradient flow. So I am just clearing my doubt with the demo program shown above.
I would be very thankful if you have some idea regarding KDE. Is there a built-in PyTorch function, or do I need to implement a custom function for KDE?
If a custom function is required, do you have an idea or a link in case it is already implemented?
There is no KDE implementation in core that I know of. You might want to google around, as someone else might already have implemented it in a third-party lib.
Otherwise, you indeed need a custom Function for this.
So you will need to make sure you have the backward formula written down for the KDE evaluation.
And in that case, yes, your sample above is correct.
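As a concrete starting point, here is a minimal sketch of a 1-D Gaussian KDE as a custom Function, with the backward formula for the query points written out by hand. The class name, the fixed Gaussian kernel, and the scalar bandwidth are my own assumptions, not an existing PyTorch API, and gradients for the data and bandwidth are left unimplemented (returned as None):

```python
import math
import torch
from torch.autograd import Function, gradcheck

class GaussianKDE(Function):
    # Hypothetical sketch: density p(x_j) = (1/(n*h)) * sum_i phi((x_j - d_i)/h),
    # where phi is the standard normal pdf.
    @staticmethod
    def forward(ctx, x, data, h):
        # x: [m] query points, data: [n] samples, h: scalar bandwidth
        diff = (x.unsqueeze(1) - data.unsqueeze(0)) / h          # [m, n]
        k = torch.exp(-0.5 * diff * diff) / math.sqrt(2 * math.pi)
        ctx.save_for_backward(diff, k)
        ctx.h = h
        ctx.n = data.shape[0]
        return k.sum(dim=1) / (ctx.n * h)                        # density at each x

    @staticmethod
    def backward(ctx, grad_output):
        diff, k = ctx.saved_tensors
        # d/dx of each kernel term is (-diff/h) * k; sum over data points
        dp_dx = (-diff / ctx.h * k).sum(dim=1) / (ctx.n * ctx.h)
        # one gradient per forward input; data and h get None here
        return grad_output * dp_dx, None, None

data = torch.randn(50, dtype=torch.double)
x = torch.tensor([0.0, 0.5, 1.0], dtype=torch.double, requires_grad=True)
density = GaussianKDE.apply(x, data, 0.5)
density.sum().backward()  # gradient now flows into x
```

Because the backward is hand-written, it is worth verifying with gradcheck (double precision) before trusting it in training.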