I noticed that some Module classes wrap the Function classes from torch.nn.functional. For example, there is a Dropout in torch.nn.functional, which is imported from torch/nn/_functions/dropout.py, and there is also a Dropout in torch.nn, which is imported from torch/nn/modules/dropout.py.
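To make the duplication concrete, here is a minimal sketch of the two Dropout forms (assuming a recent PyTorch; the tensor shape is arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 10)

# Module form: instantiated once, and it tracks train/eval mode itself
drop = nn.Dropout(p=0.5)
y1 = drop(x)

# Functional form: a plain function call; you pass the training flag explicitly
y2 = F.dropout(x, p=0.5, training=True)
```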
I read the Neural Networks tutorial, and it uses only functions from torch.nn.functional when defining a Module, so I do not understand why these module classes are needed.
trypag (Pierre Antoine Ganaye), May 12, 2017, 9:50am
This question has already been answered a number of times.
Both torch.nn and torch.nn.functional have methods such as Conv2d, max pooling, ReLU, etc. However, much public code defines Conv and Linear layers in a class's __init__ and calls them together with ReLU and pooling in forward(). Is there a good reason for that?
I am guessing it is because Conv and Linear have learnable parameters, which the module classes wrap around the functional calls, so they are defined in __init__ as members of the class. ReLU and pooling, which require no learnable parameters, can just be called in forwa…
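To make that guess concrete, here is a minimal sketch of the difference for a parameterized layer (the sizes are illustrative): the module form creates and registers its weights for you, while the functional form expects you to manage them yourself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(3, 10)

# Module form: nn.Linear creates and registers weight and bias for you
layer = nn.Linear(10, 5)
out_module = layer(x)

# Functional form: you must create and track the parameters yourself
weight = nn.Parameter(torch.randn(5, 10))
bias = nn.Parameter(torch.zeros(5))
out_functional = F.linear(x, weight, bias)
```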
It seems that there are quite a few similar functions in these two modules.
Take activation functions (or loss functions) as an example: for me, the only difference is that we need to instantiate the one in torch.nn but not the one in torch.nn.functional.
What I want to know is whether there is any further difference, say, in efficiency?
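As a sketch of that instantiation difference for a loss function (both forms should compute the same value; the shapes are arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(8, 3)
target = torch.randint(0, 3, (8,))

# torch.nn: instantiate the loss object first, then call it
criterion = nn.CrossEntropyLoss()
loss_module = criterion(logits, target)

# torch.nn.functional: call the function directly
loss_functional = F.cross_entropy(logits, target)
```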
In PyTorch you define your models as subclasses of torch.nn.Module.
In the __init__ function, you are supposed to initialize the layers you want to use. Unlike Keras, PyTorch is more low-level, and you have to specify the sizes of your network so that everything matches.
In the forward method, you specify the connections of your layers. This means that you use the layers you have already initialized, reusing the same layers for each forward pass of data you make.
torch.nn.Functiona…
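A minimal sketch of the pattern described above (layer sizes are arbitrary): layers with learnable parameters are created in __init__, while stateless operations are taken from torch.nn.functional in forward.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Layers with learnable parameters are created here
        self.conv = nn.Conv2d(1, 8, kernel_size=3)
        self.fc = nn.Linear(8 * 13 * 13, 10)

    def forward(self, x):
        # Stateless operations are called as functions
        x = F.max_pool2d(F.relu(self.conv(x)), 2)
        return self.fc(x.view(x.size(0), -1))

net = Net()
out = net(torch.randn(2, 1, 28, 28))  # -> shape (2, 10)
```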