Are torch.nn.functional layers learnable?

When using torch.nn.Linear, for example, the nn.Linear class looks after initialising and using the parameters that it requires.
When using the torch.nn.functional.linear variant, it is up to you to create those parameters yourself and provide them on each forward pass.

Basically instead of

class Model(nn.Module):
    def __init__(self, in_features, out_features):
        super(Model, self).__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, input):
        return self.linear(input)

You would do this

class Model(nn.Module):
    def __init__(self, in_features, out_features):
        super(Model, self).__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.randn(out_features))

    def forward(self, input):
        return F.linear(input, self.weight, self.bias)

In the first case, Model knows that it has an nn.Linear submodule; in the second case, Model knows that it has two parameter tensors.

So in the first case, Model.parameters() will list the weight and bias parameters of the nn.Linear submodule; in the second case, it will list the weight and bias parameters defined directly in __init__.
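To make this concrete, here is a small sketch comparing the two approaches side by side (the class names ModuleModel and FunctionalModel are just illustrative labels, not anything from the PyTorch API). Both models register exactly one weight and one bias; only the names under which they appear differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

in_features, out_features = 4, 2

class ModuleModel(nn.Module):
    # nn.Linear creates and registers its own weight and bias
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, input):
        return self.linear(input)

class FunctionalModel(nn.Module):
    # we create and register the weight and bias ourselves
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.randn(out_features))

    def forward(self, input):
        return F.linear(input, self.weight, self.bias)

# Both expose the same two parameters to the optimiser;
# only the parameter names (and state_dict keys) differ.
print([n for n, _ in ModuleModel().named_parameters()])
# ['linear.weight', 'linear.bias']
print([n for n, _ in FunctionalModel().named_parameters()])
# ['weight', 'bias']
```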

Training, saving and loading can all be done in exactly the same way in both cases.
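For instance, a standard training step and a state_dict save/load round trip work unchanged with the functional variant, because the hand-registered nn.Parameter tensors show up in model.parameters() and the state_dict like any other parameters (this is a minimal sketch; the model class, shapes, and use of mse_loss are just for illustration):

```python
import io
import torch
import torch.nn as nn
import torch.nn.functional as F

class FunctionalModel(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.randn(out_features))

    def forward(self, input):
        return F.linear(input, self.weight, self.bias)

model = FunctionalModel(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
target = torch.randn(8, 2)

# Ordinary training step: backward() populates .grad on our
# hand-made parameters, and the optimiser updates them.
loss = F.mse_loss(model(x), target)
opt.zero_grad()
loss.backward()
opt.step()

# Saving and loading via the state_dict works the same way too
# (an in-memory buffer here stands in for a file on disk).
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)
model.load_state_dict(torch.load(buf))
```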
