You are using staticmethods, so you would have to pass the variable to the forward and/or backward method. If you need to register a parameter/buffer etc., create a custom nn.Module, register the data there, and call the custom autograd.Function in the module's forward.
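Something along these lines should work (just a sketch; the names and the scaling op are placeholders, not your actual code):

```python
import torch
import torch.nn as nn


class ScaleFn(torch.autograd.Function):
    # staticmethods are stateless, so the extra tensor is passed in explicitly
    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return x * weight

    @staticmethod
    def backward(ctx, grad_output):
        x, weight = ctx.saved_tensors
        grad_x = grad_output * weight
        # the returned gradient has to match weight's shape, so reduce the broadcast dims
        grad_weight = (grad_output * x).sum().reshape(weight.shape)
        return grad_x, grad_weight


class ScaleModule(nn.Module):
    def __init__(self):
        super().__init__()
        # register the trainable tensor here, not inside the Function
        self.weight = nn.Parameter(torch.randn(1))

    def forward(self, x):
        # pass the parameter to the custom Function via .apply
        return ScaleFn.apply(x, self.weight)
```

Calling the module and then e.g. out.sum().backward() will invoke ScaleFn.backward and populate module.weight.grad.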
Yes, the tutorial explains how to write a custom nn.Module which you could use as a template to implement yours.
Yes, assuming variable is an nn.Parameter. If it's a buffer (i.e. a non-trainable tensor which should be used in the forward and stored in the state_dict), use self.register_buffer.
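For example (a small sketch, assuming variable is a tensor you create in __init__):

```python
import torch
import torch.nn as nn


class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # trainable: updated by the optimizer and included in the state_dict
        self.weight = nn.Parameter(torch.randn(10))
        # non-trainable: no gradient, but moved by .to()/.cuda() and stored in the state_dict
        self.register_buffer("scale", torch.ones(10))

    def forward(self, x):
        return x * self.weight * self.scale
```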
One last question. Here (Extending PyTorch — PyTorch 1.11.0 documentation), there is no backward function in the nn.Module. I want to use a custom backward function, as I originally did in the autograd.Function in my first post on this thread.
So, can I write def backward in the nn.Module and call the backward of the autograd.Function there?
Or do I not have to write def backward at all, because the nn.Module automatically builds the backward from my own autograd.Function (which contains the custom backward from my original post in this thread)?
No need to write a custom backward function in nn.Module since you’ve already defined the custom backward in the autograd.Function. PyTorch is smart enough to call this backward method if you use your autograd.Function via .apply in nn.Module.forward.
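You can also verify that your custom backward is correct (and is the one being called) via torch.autograd.gradcheck. A self-contained toy example with a square op, not your actual function:

```python
import torch


class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # this custom backward is what autograd calls during loss.backward()
        return grad_output * 2 * x


x = torch.randn(5, dtype=torch.double, requires_grad=True)
# compares the custom analytical backward against numerical gradients
print(torch.autograd.gradcheck(Square.apply, (x,)))  # prints True if backward is correct
```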
autograd.Functions are stateless and cannot register buffers.
If you want to register self.val as a buffer or parameter, do so in the nn.Module (i.e. Module_custom_tanh).
In your custom_tanh, pass all needed tensors to the forward and use ctx.save_for_backward if you need specific tensors in the backward function.
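Something like this sketch (I'm guessing at what custom_tanh actually computes and how self.val is used, so treat the op itself as a placeholder):

```python
import torch
import torch.nn as nn


class custom_tanh(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, val):
        out = torch.tanh(x * val)
        # store what the backward needs instead of keeping state on the Function
        ctx.save_for_backward(x, val, out)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        x, val, out = ctx.saved_tensors
        # d/dx tanh(x * val) = (1 - tanh(x * val)^2) * val
        grad_x = grad_output * (1 - out ** 2) * val
        grad_val = None
        if ctx.needs_input_grad[1]:
            # reduce to val's shape since x * val broadcasts
            grad_val = (grad_output * (1 - out ** 2) * x).sum().reshape(val.shape)
        return grad_x, grad_val


class Module_custom_tanh(nn.Module):
    def __init__(self):
        super().__init__()
        # registered here so it's trainable and shows up in the state_dict;
        # use self.register_buffer("val", torch.tensor([1.0])) if it should stay fixed
        self.val = nn.Parameter(torch.tensor([1.0]))

    def forward(self, x):
        return custom_tanh.apply(x, self.val)
```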