Hello everyone,
I’m a bit confused about how to properly extend torch.nn.
In basically all code samples I’ve seen, custom modules are compositions of already existing modules. But after playing around with tensors and autograd, it is tempting (if one wants flexibility) to just define parameters in a custom module’s __init__ and then perform some calculation in the forward function, similarly to what is described here:
http://pytorch.org/docs/0.3.1/notes/extending.html?highlight=extending#extending-torch-nn
def forward(self, input):
    # See the autograd section for explanation of what happens here.
    return LinearFunction.apply(input, self.weight, self.bias)
Now: is it okay to drop LinearFunction and perform the calculation directly, like this?
def forward(self, input):
    output = input.mm(self.weight.t())
    if self.bias is not None:
        output += self.bias.unsqueeze(0).expand_as(output)
    return output
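For context, the full module I’m experimenting with looks roughly like this (MyLinear and the 0.01 init scale are just placeholders I picked, not from the docs):

```python
import torch
import torch.nn as nn

class MyLinear(nn.Module):
    """Parameters defined in __init__, math done directly in forward
    with plain tensor ops instead of a custom autograd Function."""

    def __init__(self, in_features, out_features, bias=True):
        super().__init__()
        # nn.Parameter registers the tensors with the module,
        # so they appear in .parameters() and get gradients.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features)) if bias else None

    def forward(self, input):
        # Ordinary tensor operations; autograd records them automatically.
        output = input.mm(self.weight.t())
        if self.bias is not None:
            output = output + self.bias.unsqueeze(0).expand_as(output)
        return output

m = MyLinear(4, 3)
out = m(torch.randn(2, 4))
print(out.shape)                   # torch.Size([2, 3])
print(len(list(m.parameters())))   # 2 (weight and bias)
out.sum().backward()               # gradients flow without a custom Function
```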
It seems to work, but I’ve never seen this practice in code examples, so my question is basically whether there’s anything wrong with this approach.
Thanks for clarification.