I have a custom layer which has no parameters, yet processes the input in a complicated way.
In LuaTorch, I just need to write `updateOutput` and `updateGradInput`.
In PyTorch, I do something similar in `myFunc` (inherited from `torch.autograd.Function`).
As long as `backward` obeys the documented rule that "it should return as many Variables as there were inputs, with each of them containing the gradient w.r.t. its corresponding input", will it work no matter how I process the input data?
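For concreteness, here is a minimal sketch of the kind of parameter-free `Function` I have in mind. The squaring operation and the name `MyFunc` are just placeholders for the actual complicated processing, and this uses the static-method `Function` API:

```python
import torch
from torch.autograd import Function

class MyFunc(Function):
    """A parameter-free op; y = x * x stands in for any complicated processing."""

    @staticmethod
    def forward(ctx, input):
        # Save whatever backward needs to compute the gradient.
        ctx.save_for_backward(input)
        return input * input

    @staticmethod
    def backward(ctx, grad_output):
        (input,) = ctx.saved_tensors
        # Return one gradient per forward input: d(x*x)/dx = 2x,
        # chained with the incoming grad_output.
        return grad_output * 2 * input

x = torch.randn(3, requires_grad=True)
y = MyFunc.apply(x)
y.sum().backward()
```

Is it correct that as long as `backward` is consistent with `forward` in this way (one gradient returned per input), autograd does not care what happens inside `forward`?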