Hello,
since I am keeping my layers inside a plain Python list, they don't get recognized by network.parameters() for the optimizer.
To fix that, I created a ParameterList with the weights like this:
layer_size = [44, 32, 16, 2]
self.layers = [torch.nn.Linear(layer_size[i], layer_size[i+1]).to(device) for i in range(len(layer_size)-1)]
self.myparameters = nn.ParameterList([nn.Parameter(p.weight) for p in self.layers])
Is this correct, so that optim.Adam(self.network.parameters()) works as expected?
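One quick way to sanity-check what optim.Adam will actually receive is to print named_parameters(). A minimal runnable sketch of the setup above (MyNet is a made-up name, and I left out .to(device) to keep it short):

import torch
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self, layer_size=(44, 32, 16, 2)):
        super().__init__()
        # plain Python list: the Linear modules themselves are invisible to self.parameters()
        self.layers = [nn.Linear(layer_size[i], layer_size[i + 1])
                       for i in range(len(layer_size) - 1)]
        # manually exposed weights
        self.myparameters = nn.ParameterList([nn.Parameter(p.weight) for p in self.layers])

net = MyNet()
for name, p in net.named_parameters():
    # lists exactly the tensors that optim.Adam(net.parameters()) would train
    print(name, tuple(p.shape))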
Thanks, that works!
Would the ParameterList produce the same result? (For example, with dropout: in a plain list I can easily disable it for test/evaluation, whereas I don't see an easy way to do that with Sequential.)
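To illustrate what I mean by disabling dropout in a plain list (a rough sketch; the evaluate flag and layer sizes are made up):

import torch
import torch.nn as nn

layers = [nn.Linear(44, 32), nn.Dropout(0.5), nn.Linear(32, 2)]

def forward(x, evaluate=False):
    # with a plain list I can simply skip the Dropout entries at test time
    for layer in layers:
        if evaluate and isinstance(layer, nn.Dropout):
            continue
        x = layer(x)
    return x

out = forward(torch.randn(8, 44), evaluate=True)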
I ran some tests comparing a ParameterList and a Sequential, and it seems that to get the same results you either need to put both weight and bias into the ParameterList in the same order, or use
self.layers = torch.nn.ModuleList([torch.nn.Linear(layer_size[i], layer_size[i+1]).to(device) for i in range(len(layer_size)-1)])
which seems easier.
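For completeness: the weight-and-bias ParameterList variant would look something like

self.myparameters = nn.ParameterList([p for layer in self.layers for p in (layer.weight, layer.bias)])

and here is the ModuleList version as a runnable sketch (MyNet is a made-up name; every weight and bias is registered automatically):

import torch
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self, layer_size=(44, 32, 16, 2)):
        super().__init__()
        # ModuleList registers each Linear, including its bias
        self.layers = nn.ModuleList([nn.Linear(layer_size[i], layer_size[i + 1])
                                     for i in range(len(layer_size) - 1)])

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.relu(layer(x))
        return self.layers[-1](x)

net = MyNet()
print(len(list(net.parameters())))  # 6: three weights + three biases
optimizer = torch.optim.Adam(net.parameters())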