Self-defined submodule parameters are not registered

import torch
import torch.nn as nn
import torch.nn.functional as F

class nonlinear(nn.Module):

    def __init__(self):
        super(nonlinear, self).__init__()
        self.scale_container = []
        for ii in range(10):
            self.scale_container.append(nn.Parameter(torch.FloatTensor([1.0]).cuda(), requires_grad=True))

    def forward(self, x):
        output = None
        for ii in range(10):
            scale = self.scale_container[ii].expand_as(x)
            if output is None:
                output = (F.tanh(scale * x) + 1.0) / 2.0
            else:
                output += (F.tanh(scale * x) + 1.0) / 2.0
        return output

class network(nn.Module):

    def __init__(self):
        super(network, self).__init__()

    def forward(self, x):
        x = nonlinear()(x)
        return x

CNN = network()

Then list(CNN.parameters()) is empty… So why are the parameters not registered into the model?

Because the default way parameters get registered is by assignment as an attribute of a module, through nn.Module's __setattr__. If they're sitting in a plain Python list, __setattr__ never gets called. Instead you need an nn.ModuleList to hold your parameters. It also has an append method and is iterable, so you can just replace
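To make the registration rule concrete, here is a minimal sketch (the class names `Registered`/`Unregistered` are illustrative, not from the post): a parameter assigned directly as an attribute goes through nn.Module's __setattr__ and shows up in .parameters(), while the same parameter hidden inside a plain list does not.

```python
import torch
import torch.nn as nn

class Registered(nn.Module):
    def __init__(self):
        super().__init__()
        # attribute assignment -> intercepted by __setattr__ -> registered
        self.scale = nn.Parameter(torch.ones(1))

class Unregistered(nn.Module):
    def __init__(self):
        super().__init__()
        # parameter lives inside a plain list -> invisible to .parameters()
        self.scales = [nn.Parameter(torch.ones(1))]

print(len(list(Registered().parameters())))    # 1
print(len(list(Unregistered().parameters())))  # 0
```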

self.scale_container = []

with

self.scale_container = nn.ModuleList()

and you should be good, at least for nonlinear. You can also iterate over it directly, i.e.

for scale in self.scale_container:

rather than bothering with range and ii.

For network, if you’re using a module with parameters (i.e. nonlinear) you should assign it as a property in __init__, i.e. self.nonlinearity = nonlinear() or something, and then use self.nonlinearity in your forward. Otherwise again, the parameters don’t have any way to get registered.
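The same rule for submodules can be sketched with a tiny stand-in (the names `Scale`, `Bad`, and `Good` are illustrative, not from the post): a module constructed fresh inside forward is never registered, while one assigned as an attribute in __init__ is.

```python
import torch
import torch.nn as nn

class Scale(nn.Module):  # hypothetical stand-in for `nonlinear`
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.w * x

class Bad(nn.Module):
    def forward(self, x):
        # a fresh module on every call: its parameters are never registered
        return Scale()(x)

class Good(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = Scale()  # assigned in __init__, so registered

    def forward(self, x):
        return self.scale(x)

print(len(list(Bad().parameters())))   # 0
print(len(list(Good().parameters())))  # 1
```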


Thanks for your kind reply, it really helps!

I think you should use nn.ParameterList() rather than nn.ModuleList(), since the container holds nn.Parameter objects (which are tensors, not modules).
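Putting the thread's suggestions together, a minimal CPU sketch of the corrected classes might look like this (assuming nn.ParameterList as suggested above, and dropping the `.cuda()` call and the unused bitW/bitA arguments for brevity):

```python
import torch
import torch.nn as nn

class nonlinear(nn.Module):
    def __init__(self):
        super().__init__()
        # ParameterList registers each nn.Parameter it holds
        self.scale_container = nn.ParameterList(
            nn.Parameter(torch.ones(1)) for _ in range(10)
        )

    def forward(self, x):
        output = torch.zeros_like(x)
        for scale in self.scale_container:  # iterate directly, no range/ii
            output = output + (torch.tanh(scale.expand_as(x) * x) + 1.0) / 2.0
        return output

class network(nn.Module):
    def __init__(self):
        super().__init__()
        # assigned in __init__, so the submodule's parameters are registered
        self.nonlinearity = nonlinear()

    def forward(self, x):
        return self.nonlinearity(x)

CNN = network()
print(len(list(CNN.parameters())))  # 10
```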