TypeError: ‘Conv2d’ object is not iterable

I want to use the Xavier initialization scheme to initialize the weights of the ResNet from http://www.pabloruizruiz10.com/resources/CNNs/ResNet-PyTorch.html

So I added this to the network:

def xavier_init(ms):
    for m in ms:
        if isinstance(m, nn.Linear) or isinstance(m, nn.Conv2d):
            nn.init.xavier_uniform(m.weight, gain=nn.init.calculate_gain('relu'))
            m.bias.data.zero_()

and then I call it in the main code:

net.weight_init()

It is giving this error:

TypeError: ‘Conv2d’ object is not iterable

How can I solve this error?


Hi, I don't know if this will help, but I found that in order to use xavier_uniform one should do:

linear1=torch.nn.Linear(N_FEATURES, hiddenLayerSize, bias=True)
torch.nn.init.xavier_uniform(linear1.weight)

instead of:

nn.init.xavier_uniform(m.weight, gain=nn.init.calculate_gain('relu'))

It’s discussed here: How to initialize the conv layers with xavier weights initialization?

I'm not sure how you are calling xavier_init, but if ms is a single module, the for loop will throw this error.
Your current code (net.weight_init()) is not passing any arguments, and xavier_init is not defined as a method of the class.
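To illustrate: a minimal sketch (with a hypothetical nn.Sequential, not your ResNet) showing that iterating a single module raises exactly this error, while iterating .modules() of a container works:

```python
import torch.nn as nn

conv = nn.Conv2d(3, 64, kernel_size=3)

# A single module is not iterable, so `for m in conv` fails:
try:
    for m in conv:
        pass
except TypeError as e:
    print(e)  # 'Conv2d' object is not iterable

# Iterating over .modules() of a container works:
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Linear(8, 10))
for m in net.modules():
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight, gain=nn.init.calculate_gain('relu'))
```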

Thank you for your reply.
I’m calling xavier_init as follows:

def xavier_init(ms):
    for m in ms:
        if isinstance(m, nn.Linear) or isinstance(m, nn.Conv2d):
            nn.init.xavier_uniform(m.weight, gain=nn.init.calculate_gain('relu'))
            m.bias.data.zero_()


class ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super(ResNet, self).__init__()
        self.in_planes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512*block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1]*(num_blocks-1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)


    def forward(self, x, lin=0, lout=5):
        out = x
        if lin < 1 and lout > -1:
            out = self.conv1(out)
            out = self.bn1(out)
            out = F.relu(out)
        if lin < 2 and lout > 0:
            out = self.layer1(out)
        if lin < 3 and lout > 1:
            out = self.layer2(out)
        if lin < 4 and lout > 2:
            out = self.layer3(out)
        if lin < 5 and lout > 3:
            out = self.layer4(out)
        if lout > 4:
            out = F.avg_pool2d(out, 4)
            out = out.view(out.size(0), -1)
            out = self.linear(out)  
        return out   
    def weight_init(self):
        for m in self._modules:
            xavier_init(self._modules[m])

and in the main code I'm calling the ResNet and weight_init as follows:

net = models.__dict__[args.model](num_classes)
net.weight_init() 

What should I pass as an argument to net.weight_init()?

I would recommend calling model.apply with your weight init method and removing the loop inside xavier_init:

def xavier_init(m):
    if isinstance(m, nn.Linear) or isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform(m.weight, gain=nn.init.calculate_gain('relu'))
        m.bias.data.zero_()

model.apply(xavier_init)

Also, your instantiation of net looks as if you are importing the model directly from torchvision.models instead of your custom ResNet definition.

Actually, I am using argparse for importing my custom ResNet. If I understood correctly, model.apply (or in my case net.apply) should go in the main code where I have net = models.__dict__[args.model](num_classes), while the xavier_init function has to live in model.py (a separate .py file where the model is defined). My question is: do I have to put the model in the main .py file in order to use model.apply, or is there a way to keep separate .py files for main and model and still use model.apply?

You could call self.apply from some internal method (e.g. __init__), passing self.xavier_init.
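A minimal sketch of that idea, using a hypothetical toy model (not the ResNet above), where the init runs automatically at construction time:

```python
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy model illustrating self.apply() from inside __init__."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
        self.fc = nn.Linear(8, 2)
        # apply the init to self and every submodule at construction time
        self.apply(self._xavier_init)

    @staticmethod
    def _xavier_init(m):
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.xavier_uniform_(m.weight, gain=nn.init.calculate_gain('relu'))
            if m.bias is not None:  # e.g. the conv above has bias=False
                m.bias.data.zero_()

model = SmallNet()
```

This way the main script only needs to instantiate the model; no extra call to a weight-init function is required there.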

Thank you for your reply.

I used model.apply, it is giving this error:

File "...", line 308, in <module>
    net.apply(xavier_init)      
  File "...", line 293, in apply
    module.apply(fn)
  File "...", line 293, in apply
    module.apply(fn)
  File "...", line 294, in apply
    fn(self)
  File "...", line 191, in xavier_init
    m.bias.data.zero_()
AttributeError: 'NoneType' object has no attribute 'data'

Probably some layers do not have a bias parameter, so you should add a condition to the weight init function.
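For reference, that condition could look like this: a minimal sketch with a toy nn.Sequential standing in for the actual ResNet (whose convolutions are created with bias=False, which is what triggered the error):

```python
import torch.nn as nn

def xavier_init(m):
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight, gain=nn.init.calculate_gain('relu'))
        if m.bias is not None:  # skip layers created with bias=False
            m.bias.data.zero_()

# toy stand-in for the ResNet; note the bias-less conv layer
net = nn.Sequential(nn.Conv2d(3, 8, 3, bias=False), nn.ReLU(), nn.Linear(8, 10))
net.apply(xavier_init)
```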

It is solved. Thank you for your help, @ptrblck.

@Niki Could you please show how you solved the problem? Thanks!