I am trying to add an SE (squeeze-and-excitation) block to a pretrained ResNet architecture.
    self.encoder = torchvision.models.resnet34(pretrained=pretrained)
    self.conv1 = nn.Sequential(
        self.encoder.conv1,
        self.encoder.bn1,
        self.encoder.relu,   # a comma was missing here, which is a syntax error
        SeBlock(64, 2),
        nn.MaxPool2d(2, 2),
    )
    self.conv2 = nn.Sequential(self.encoder.layer1, SeBlock(64, 2))
    self.conv3 = nn.Sequential(self.encoder.layer2, SeBlock(128, 2))
    self.conv4 = nn.Sequential(self.encoder.layer3, SeBlock(256, 2))
    self.conv5 = nn.Sequential(self.encoder.layer4, SeBlock(512, 2))
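For reference, here is a minimal squeeze-and-excitation block matching the `SeBlock(channels, reduction)` signature used above. This is my assumed implementation (the exact class is not shown in the snippet): global average pool to squeeze, a two-layer bottleneck MLP to excite, then channel-wise rescaling.

```python
import torch
import torch.nn as nn

class SeBlock(nn.Module):
    """Squeeze-and-Excitation block (assumed implementation): channel attention."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)          # squeezed channel descriptor (B, C)
        w = self.fc(w).view(b, c, 1, 1)      # excitation weights in (0, 1)
        return x * w                         # rescale each channel

x = torch.randn(2, 64, 8, 8)
y = SeBlock(64, 2)(x)
print(y.shape)  # torch.Size([2, 64, 8, 8]) -- SE preserves the input shape
```

Because the block preserves shape, it can be dropped after any ResNet stage without changing downstream layer dimensions.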
Above is my code, but it is not working. Looking at GitHub repositories, they instead build ResNet from scratch with the SE blocks inserted, then load `state_dict()` weights for the ResNet layers and train the remaining (new) parts of the model.
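To illustrate that build-then-load procedure, here is a hedged sketch using tiny stand-in modules rather than a full ResNet (the class names and layer shapes are my own, purely for illustration). The key mechanism is `load_state_dict(..., strict=False)`: matching pretrained weights are copied, while the new SE parameters are left randomly initialized for training.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Stand-in for the pretrained network (e.g. resnet34)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

class SEBackbone(nn.Module):
    """Same backbone rebuilt from scratch, plus an extra SE-style layer."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)   # same name -> weights will load
        self.se_fc = nn.Linear(8, 8)                # new layer, no pretrained weights

pre = Backbone()   # in practice: torchvision.models.resnet34(pretrained=True)
se = SEBackbone()

# strict=False copies every parameter whose name matches (here, conv.*)
# and reports the new SE parameters as "missing" instead of raising an error.
missing, unexpected = se.load_state_dict(pre.state_dict(), strict=False)
print(missing)     # ['se_fc.weight', 'se_fc.bias']
print(unexpected)  # []
```

After loading, `se.conv` holds the pretrained weights and only the SE parameters need to be trained (optionally with the backbone frozen at first).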
I just need to know the correct procedure for using an SE block with a pretrained ResNet.