Using layers of a pretrained model and concatenating additional layers

Hello everyone, I’m a newbie with PyTorch. Previously I only built small, not very complex models, so now I’m trying to build and train bigger ones, starting with pose estimation. As the literature indicates, this model uses the first 10 layers of VGG19. I already figured out how to do this by reading this forum, but I’m still not sure whether I’m doing it correctly or building the next layers the right way:
Step 1:
I took the vgg19 model and kept the layers of interest:

import torch
import torch.nn as nn
from torchvision import models

model_vgg19 = models.vgg19(pretrained=True)
# Freeze the pretrained feature extractor
for param in model_vgg19.features.parameters():
    param.requires_grad = False
my_vgg19 = model_vgg19.features[:23]
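
To double-check the slice, a quick shape test like the following (the 224×224 input is just an assumed size for illustration) should print a 512-channel feature map, downsampled by the three max-pool layers:

x = torch.randn(1, 3, 224, 224)  # dummy image batch
with torch.no_grad():
    feats = my_vgg19(x)
print(feats.shape)  # expected: torch.Size([1, 512, 28, 28])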

Step 2:
Here is where I build the next part of the model:

class RMPPE(nn.Module):
    def __init___(self):
        super(RMPPE, self).__init__()
        # Reduce the 512-channel VGG features down to 128 channels
        self.ConvLayerStep2 = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True))

        # First branch, ending in 38 output channels
        self.ConvLayerStep3a = nn.Sequential(
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 38, kernel_size=1, stride=1, padding=0))

        # Second branch, ending in 19 output channels
        self.ConvLayerStep3b = nn.Sequential(
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 19, kernel_size=1, stride=1, padding=0))

    def forward(self, x):
        out_ConvLayerStep2 = self.ConvLayerStep2(x)
        out_ConvLayerStep3a = self.ConvLayerStep3a(out_ConvLayerStep2)
        out_ConvLayerStep3b = self.ConvLayerStep3b(out_ConvLayerStep2)

        # Concatenate both branch outputs with the shared features
        out = torch.cat((out_ConvLayerStep2, out_ConvLayerStep3a, out_ConvLayerStep3b), dims=1)
        return out
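
About the concatenation at the end: every conv above preserves the spatial size, so the three outputs should only differ in channels, and stacking them along the channel dimension should give 128 + 38 + 19 = 185 channels. A tiny standalone sanity check (the 28×28 size is just a placeholder):

f = torch.randn(1, 128, 28, 28)  # like out_ConvLayerStep2
a = torch.randn(1, 38, 28, 28)   # like out_ConvLayerStep3a
b = torch.randn(1, 19, 28, 28)   # like out_ConvLayerStep3b
print(torch.cat((f, a, b), 1).shape)  # torch.Size([1, 185, 28, 28])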

Step 3:
Before training or anything like that, I just want to visualize the model using print:

model_pose = nn.Sequential(my_vgg19, RMPPE())
print(model_pose)

The output is this:

Sequential(
  (0): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace)
    (16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (17): ReLU(inplace)
    (18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace)
  )
  (1): RMPPE()
)

So my current status is: I have no clue what is happening, whether something is missing or my approach isn’t correct. Thanks for your attention and for taking the time to read this big post; any guidance or help would be awesome.

The code should generally work.

Your model definition of RMPPE has a typo in its __init__ method, where you are using three trailing underscores. Because of that your custom __init__ is never called (Python falls back to nn.Module’s), so the layers are never registered, which is why print shows an empty RMPPE(). If you fix this, model_pose will also contain the submodules of this model.

Also, in torch.cat you are using dims, which should be dim :wink:
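
For reference, a minimal corrected sketch of those two spots (all other layers unchanged) would be:

class RMPPE(nn.Module):
    def __init__(self):  # two underscores on each side
        super(RMPPE, self).__init__()
        # ... same ConvLayerStep2 / ConvLayerStep3a / ConvLayerStep3b as above ...

    def forward(self, x):
        out_ConvLayerStep2 = self.ConvLayerStep2(x)
        out_ConvLayerStep3a = self.ConvLayerStep3a(out_ConvLayerStep2)
        out_ConvLayerStep3b = self.ConvLayerStep3b(out_ConvLayerStep2)
        # dim, not dims
        out = torch.cat((out_ConvLayerStep2, out_ConvLayerStep3a, out_ConvLayerStep3b), dim=1)
        return out

With those two fixes, a dummy forward pass like model_pose(torch.randn(1, 3, 224, 224)) should return a tensor of shape [1, 185, 28, 28] (128 + 38 + 19 channels).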


Thanks for the guidance, and sorry for my typos. I guess that’s the issue when you use notebooks. Thanks!
