Access weights of a specific module in nn.Sequential()

Yes, because they are logits now.
If you want to see the “probability” for debugging purposes, apply torch.sigmoid on the outputs, but don’t feed the result to your criterion.
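For example, a minimal sketch of this split, assuming a toy linear binary classifier and a criterion such as nn.BCEWithLogitsLoss that expects raw logits (model and shapes are placeholders):

import torch
import torch.nn as nn

# toy binary classifier; shapes are only for illustration
model = nn.Linear(10, 1)
criterion = nn.BCEWithLogitsLoss()

images = torch.randn(4, 10)
labels = torch.randint(0, 2, (4, 1)).float()

logits = model(images)              # raw scores, no sigmoid applied
loss = criterion(logits, labels)    # the criterion gets the logits

with torch.no_grad():
    probs = torch.sigmoid(logits)   # probabilities, for debugging/inspection only
    print(probs)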

Many thanks for your answer.

You mean for computing the loss (both training and validation) I should pass the logits output directly:

        outputs=[]
        outputs = model(images2)
        loss = criterion(outputs,labels.float())

But if I want to get the probability from the outputs (for example, if I want to try a trained ANN on a new test set, or get probabilities from the validation set), I should write:

output=torch.sigmoid(output)
predicted=[]
predicted = (output.data >= 0.5)

yes?

Yes, that’s basically correct.
A few minor issues:

  • you don’t need to initialize outputs and predicted as lists before the assignment
  • the usage of .data is discouraged, as it can have unwanted side effects; use the tensor directly via outputs >= 0.5
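Putting both points together, a minimal sketch (model, criterion, images2, and labels are taken from the snippets above):

outputs = model(images2)                   # logits go straight to the loss
loss = criterion(outputs, labels.float())

probs = torch.sigmoid(outputs)             # probabilities only for evaluation/debugging
predicted = probs >= 0.5                   # no .data, no list pre-initialization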

Thanks for your great comments.

Sorry, I’m becoming a bit obsessive. My input data is a 3D patch of size 11x11x7; after going through the DataLoader the shape is permuted to 7x11x11. I wrote this code to permute it back to 11x11x7 and feed it to the ANN as a tensor:

for jj1, (images, labels) in enumerate(trainloader, 0):

    images1 = torch.zeros(32, 11, 11, 7)
    for ff in range(images.shape[0]):
        Tensor1 = images[ff, :, :, :].numpy()
        images1[ff, :, :, :] = torch.from_numpy(Tensor1.transpose((1, 2, 0)))
    images2 = images1.view(-1, 11*11*7)

    images2 = images2.cuda()
    labels = labels.cuda()

    optimizer.zero_grad()
    outputs = model(images2)
    loss = criterion(outputs, labels.float())
    loss.backward()
    optimizer.step()
    outputs1 = torch.sigmoid(outputs)
    predicted = (outputs1 >= 0.5)
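As a side note, a possible simplification (a sketch, assuming images comes out of the DataLoader with shape (N, 7, 11, 11)): the per-sample numpy transpose could be replaced by a single permute call:

images2 = images.permute(0, 2, 3, 1).contiguous().view(-1, 11*11*7)
images2 = images2.cuda()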

Another problem :frowning:

when I use nn.BCEWithLogitsLoss()

I get this error:

loss = criterion(outputs,labels)

File “/apps/pytorch/1.2.0-py36-cuda90/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 547, in __call__
result = self.forward(*input, **kwargs)
File “/apps/pytorch/1.2.0-py36-cuda90/lib/python3.6/site-packages/torch/nn/modules/loss.py”, line 498, in forward
return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
File “/apps/pytorch/1.2.0-py36-cuda90/lib/python3.6/site-packages/torch/nn/functional.py”, line 2051, in binary_cross_entropy
input, target, weight, reduction_enum)
RuntimeError: reduce failed to synchronize: device-side assert triggered

Why does this happen? :frowning:

Try using nn.ModuleDict, which lets you store layers in a dictionary. The modules it contains are properly registered and visible to all Module methods, as opposed to using a plain Python dictionary.
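A minimal sketch of the usage, with made-up layer names:

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # registered: parameters show up in .parameters(), .state_dict(), .cuda(), ...
        self.layers = nn.ModuleDict({
            'conv': nn.Conv2d(3, 16, 3, padding=1),
            'bn': nn.BatchNorm2d(16),
        })

    def forward(self, x):
        x = self.layers['conv'](x)
        return self.layers['bn'](x)

model = MyModel()
print(list(model.state_dict().keys()))  # includes layers.conv.weight, layers.bn.weight, ...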


Hi
I have one question regarding the model weight saving format.

I want to save my model weights in the same layer-wise format as the pre-trained ResNet models (resnet152, resnet50, etc.).

Here I am attaching my resnet model code:

class ResnetModel(nn.Module):
    def __init__(self):
        super(ResnetModel, self).__init__()
        self.layer1 = nn.Sequential(nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(32), nn.LeakyReLU(0.03))
        self.layer2 = nn.Sequential(ResNetBlock(32, 32, True))
        self.layer3 = nn.Sequential(ResNetBlock(32, 32, False), nn.MaxPool2d(3, stride=3, padding=1))
        self.layer4 = nn.Sequential(ResNetBlock(32, 32, False), nn.MaxPool2d(3, stride=3, padding=1))
        self.layer5 = nn.Sequential(ResNetBlock(32, 32, False), nn.MaxPool2d(3, stride=3, padding=1))
        self.layer6 = nn.Sequential(ResNetBlock(32, 32, False), nn.MaxPool2d(3, stride=3, padding=1))
        self.layer7 = nn.Sequential(ResNetBlock(32, 32, False), nn.BatchNorm2d(32), nn.LeakyReLU(0.03), nn.MaxPool2d(3, stride=3, padding=1))
        self.lrelu = nn.LeakyReLU(0.03)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(32, 128)
        self.fc2 = nn.Linear(128, 2)

    def forward(self, x, y=None):
        batch_size = x.size(0)
        x = x.unsqueeze(dim=1)

        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = self.layer5(out)
        out = self.layer6(out)
        out = self.layer7(out)

        out = out.view(batch_size, -1)
        out = self.dropout(out)
        out = self.fc1(out)
        out = self.lrelu(out)
        out = self.fc2(out)
        return out

This code saves the weights in the “module.layer.0.weight” format shown below:

module.layer1.0.bias
tensor([-0.1583, 0.1678, -0.1230, -0.1435, -0.0654, 0.2234, -0.0085, 0.2544])

module.layer1.1.weight
tensor([0.9833, 0.9923, 0.9996, 0.9952, 0.9940, 1.0030, 0.9963, 0.9980, 0.9984])

But I want to save my model weights layer-wise, sequentially, in the format shown below:

(‘layer1’, Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

)
)

I want to save the weights in this format because I want to plot the weights of specific layers sequentially, and within a layer the weights of specific blocks. For example, I want to plot the weights of the (0): Bottleneck block inside ‘layer1’.

Could you please let me know what changes I need to make in my resnet model code so that I can save the weights in the above-mentioned format?

Any suggestions would be helpful.

Thanks in advance

I’m not sure if you need a custom format to plot the parameters in the desired way.
Could you have a look at the state_dict of your model and check if the keys would already allow you to perform the plotting?

import torchvision.models as models

model = models.resnet18()

sd = model.state_dict()
print(sd.keys())
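For instance, a minimal sketch of grouping the parameters of one block for plotting (the 'layer1.0.' prefix is an assumption based on the printed keys):

import torchvision.models as models

model = models.resnet18()
sd = model.state_dict()

# collect all parameters of the first block inside layer1
block_params = {k: v for k, v in sd.items() if k.startswith('layer1.0.')}
for name, param in block_params.items():
    print(name, tuple(param.shape))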

I cannot seem to find a way to initialize the weights of different dense blocks.
I am using for name, layer in model.named_modules() to access the layers. I initialize the weights of ‘feature.denseblock2’ with a custom value and the rest with Xavier initialization.
The issue is that the layers inside each dense block end up with the Xavier initialization as the loop iterates through named_modules.
return_nodes.keys() = ‘feature.denseblock2’

def init_weights(self):
    for name, layer in self.model.named_modules():
        if name in return_nodes.keys():
            for layer in layer.modules():
                if isinstance(layer, nn.Conv2d):
                    nn.init.constant_(layer.weight, 1/3)
                    if layer.bias is not None:
                        nn.init.constant_(layer.bias, 0)
                elif isinstance(layer, nn.BatchNorm2d):
                    nn.init.constant_(layer.weight, 1)
        else:
            # print(name)
            if isinstance(layer, nn.Conv2d):
                nn.init.xavier_normal_(layer.weight)
                # print(layer)
                if layer.bias is not None:
                    nn.init.constant_(layer.bias, 0)
            elif isinstance(layer, nn.BatchNorm2d):
                nn.init.constant_(layer.weight, 1)
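For reference, a possible workaround as a sketch (return_nodes, self.model, and the block names are assumed from the snippet above): treat a module as belonging to a block when its name starts with one of the block names, since named_modules also yields the children of those blocks:

def init_weights(self):
    block_names = tuple(return_nodes.keys())  # e.g. ('feature.denseblock2',)
    for name, layer in self.model.named_modules():
        # child modules of a block have names like 'feature.denseblock2.denselayer1.conv1'
        inside_block = name in block_names or name.startswith(tuple(b + '.' for b in block_names))
        if isinstance(layer, nn.Conv2d):
            if inside_block:
                nn.init.constant_(layer.weight, 1/3)   # custom init inside the block
            else:
                nn.init.xavier_normal_(layer.weight)   # Xavier everywhere else
            if layer.bias is not None:
                nn.init.constant_(layer.bias, 0)
        elif isinstance(layer, nn.BatchNorm2d):
            nn.init.constant_(layer.weight, 1)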