Hey, I’ve been thinking of adding an extra max-pool layer after the first layer of ResNet50, so my plan was to slice the network and insert a layer in between. But even when I just slice the network and join it back together, it throws an error.
import torch
from torchvision import models
from torchsummary import summary

model = models.resnet50(pretrained=True)
initfour = list(model.children())[0:5]
restlast = list(model.children())[5:]
combi = initfour + restlast
nnmod = torch.nn.Sequential(*combi)
print(summary(nnmod.cuda(), (3, 300, 600)))
Error:
/home/ayush/anaconda3/envs/pytorch_1.8.1/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1751 if has_torch_function_variadic(input, weight):
1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753 return torch._C._nn.linear(input, weight, bias)
1754
1755
RuntimeError: mat1 dim 1 must match mat2 dim 0
When I print both the original model and this one, they look identical. What am I doing wrong here?
Your nn.Sequential container is missing the functional API calls used in the original forward method; e.g. the torch.flatten(x, 1) operation will be missing and will raise errors. Either add all functional calls as new modules into the nn.Sequential container or write a new custom module with a new forward method.
You mean like this?
class myModel(torch.nn.Module):
    def __init__(self):
        super(myModel, self).__init__()
        model = models.resnet50(pretrained=True)
        modList = list(model.children())[0:5] + list(model.children())[5:]
        self.Conv1 = torch.nn.Sequential(*modList)

    def forward(self, x):
        x = self.Conv1(x)
        return x
What I was previously trying to do is add a max-pool after the first block of the pretrained ResNet50; I wanted to do this so the model would keep its pretrained weights. Is this the right way to do it?
class myModel(torch.nn.Module):
    def __init__(self):
        super(myModel, self).__init__()
        model_ft = models.resnet50(pretrained=True)
        self.Conv1 = torch.nn.Sequential(*list(model_ft.children())[:5])
        self.Conv2 = torch.nn.Sequential(*list(model_ft.children())[5:9])
        self.adaptivemaxpool = torch.nn.AdaptiveMaxPool2d((38, 75))
        self.fc = torch.nn.Sequential(torch.nn.Linear(2048, 3))

    def forward(self, x):
        x = self.Conv1(x)
        x = self.adaptivemaxpool(x)
        x = self.Conv2(x)
        x = x.reshape(x.shape[0], -1)
        x = self.fc(x)
        return x
I’m not facing the initial issue now.
No, I don’t think these codes are equivalent to a standard ResNet. The first code snippet doesn’t flatten the activation, while the second one seems to be missing the pooling layer before the linear layer. Since you’ve added a new pooling layer after layer1, this might however be expected and might work.