Using nn.ParameterList

Hello,
since I am storing my layers in a plain Python list, they don’t get picked up by network.parameters() for the optimizer.
To fix that I created a ParameterList with the weights like this:

layer_size = [44, 32, 16, 2]
self.layers = [torch.nn.Linear(layer_size[i], layer_size[i + 1]).to(device) for i in range(len(layer_size) - 1)]
self.myparameters = nn.ParameterList([nn.Parameter(p.weight) for p in self.layers])

Is this correct, i.e. will optim.Adam(self.network.parameters()) work as expected?

I think it will be easier if you use the torch.nn.Sequential container.

You can pass your list of layers into it and the parameters will be registered without any problem.
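For example, a minimal sketch based on the layer sizes from your snippet (the variable names are just for illustration):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
layer_size = [44, 32, 16, 2]

# Unpacking the layer list into nn.Sequential registers every layer
# as a submodule, so .parameters() sees all weights *and* biases.
network = nn.Sequential(
    *[nn.Linear(layer_size[i], layer_size[i + 1]) for i in range(len(layer_size) - 1)]
).to(device)

optimizer = torch.optim.Adam(network.parameters())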

Thanks, that works!
Would the ParameterList produce the same result? For example, when I have a dropout layer, with a plain list I can easily disable it for test/evaluation, whereas I don’t see an easy way to do that with Sequential.

Actually, you can disable dropout/batchnorm easily in this case as well:

model = nn.Sequential(*args)
model.eval()

model.eval() puts the model into evaluation mode, which disables dropout and makes batchnorm use its running (population) statistics instead of batch statistics. Call model.train() to switch back for training.
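A quick sketch to illustrate (the layer sizes here are just for demonstration):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(44, 16), nn.Dropout(p=0.5), nn.Linear(16, 2))
x = torch.randn(1, 44)

model.eval()  # dropout becomes a no-op; batchnorm (if any) would use running stats
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)
assert torch.equal(out1, out2)  # forward passes are deterministic in eval mode

model.train()  # switch back: dropout is active again during training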

I ran some tests comparing a ParameterList and Sequential. It seems that to get the same results, you either need to put both the weight and the bias of each layer into the ParameterList (in the same order), or use

self.layers = torch.nn.ModuleList([torch.nn.Linear(layer_size[i], layer_size[i + 1]).to(device) for i in range(len(layer_size) - 1)])
which seems easier.
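For reference, a minimal sketch of the ModuleList variant inside a full module (the class and attribute names are just for illustration):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class Net(nn.Module):
    def __init__(self, layer_size=(44, 32, 16, 2)):
        super().__init__()
        # nn.ModuleList registers each Linear layer (weight *and* bias)
        # as a submodule, so .parameters() returns everything,
        # while still letting you index the layers like a list.
        self.layers = nn.ModuleList(
            [nn.Linear(layer_size[i], layer_size[i + 1]) for i in range(len(layer_size) - 1)]
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

network = Net().to(device)
optimizer = torch.optim.Adam(network.parameters())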