MNIST Incremental Learning

Here is an example of directly manipulating the layer. Note that you shouldn't use the .data attribute; instead, wrap the manipulation in a with torch.no_grad() block so the change isn't tracked by autograd.
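
A minimal sketch of this approach, assuming the goal is to grow a small nn.Linear classifier head by one output class (the layer sizes here are arbitrary and just for illustration):

import torch
import torch.nn as nn

lin = nn.Linear(10, 2)  # hypothetical head with 2 output classes
x = torch.randn(1, 10)
print(lin(x).shape)  # torch.Size([1, 2])

# Grow the layer by one output class. torch.no_grad() ensures the
# concatenation isn't recorded by autograd before re-wrapping the
# result as a fresh nn.Parameter.
with torch.no_grad():
    lin.weight = nn.Parameter(torch.cat([lin.weight, torch.randn(1, 10)], dim=0))
    lin.bias = nn.Parameter(torch.cat([lin.bias, torch.zeros(1)], dim=0))
    lin.out_features = 3

print(lin(x).shape)  # torch.Size([1, 3])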

Alternatively, here is an example using nn.ParameterList:

import torch
import torch.nn as nn
import torch.nn.functional as F


class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Start with a single weight chunk of shape [2, 10]
        # (2 output features, 10 input features).
        self.params = nn.ParameterList()
        self.params.append(nn.Parameter(torch.randn(2, 10)))

    def forward(self, x):
        # Concatenate all chunks into one weight matrix for F.linear.
        w = torch.cat(tuple(self.params), dim=0)
        x = F.linear(x, w)
        return x
    

model = MyModel()
x = torch.randn(1, 10)
out = model(x)
print(out.shape)  # torch.Size([1, 2])

# Append another chunk; the effective weight matrix grows to [3, 10].
new_param = nn.Parameter(torch.randn(1, 10))
model.params.append(new_param)

out = model(x)
print(out.shape)  # torch.Size([1, 3])

In that case you could use optimizer.add_param_group to register the new parameter with the optimizer, so that it is also updated during subsequent training steps.
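
For example, a minimal sketch continuing the MyModel snippet above (the SGD optimizer and learning rate are arbitrary choices):

import torch
import torch.nn as nn
import torch.optim as optim

model = MyModel()  # the class defined above
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Grow the model with a new parameter, then register it with the
# optimizer so it receives gradient updates as well.
new_param = nn.Parameter(torch.randn(1, 10))
model.params.append(new_param)
optimizer.add_param_group({'params': [new_param]})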