This error is usually raised if e.g. the forward method isn't defined, which can happen if there is a typo or wrong indentation in the code.
Your code is unfortunately not formatted properly (you can add code snippets by wrapping them into three backticks ```), so I don't know what exactly is failing.
Yes, there's no problem with creating the class, but if your super call isn't indented correctly it will be a plain object and not a properly initialized nn.Module. Is your class like,
```python
class Start_RN50(torch.nn.Module):
    def __init__(self, rn50):
        super(Start_RN50, self).__init__()
        self.start = torch.nn.Sequential(rn50.conv1, rn50.relu, rn50.maxpool)

    def forward(self, x):
        x = self.start(x)
        return x
```
OR
```python
class Start_RN50(torch.nn.Module):
    def __init__(self, rn50):
        super(Start_RN50, self).__init__()  # <<<<<< this line
        self.start = torch.nn.Sequential(rn50.conv1, rn50.relu, rn50.maxpool)

    def forward(self, x):
        x = self.start(x)
        return x
```
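As a quick illustration of why the super call matters (a minimal sketch with a hypothetical `Broken` class), skipping `super().__init__()` breaks submodule registration, since assigning a submodule requires the internal attributes that `nn.Module.__init__` sets up:

```python
import torch

class Broken(torch.nn.Module):
    def __init__(self):
        # super().__init__() is intentionally NOT called here
        pass

m = Broken()
try:
    # nn.Module.__setattr__ needs self._modules, which only
    # nn.Module.__init__ creates, so this assignment fails
    m.linear = torch.nn.Linear(2, 2)
except AttributeError as e:
    print("AttributeError:", e)
```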
```python
class Middle_RN50(torch.nn.Module):
    def __init__(self, rn50):
        super(Middle_RN50, self).__init__()
        self.start = torch.nn.Sequential(
            rn50.conv1, rn50.relu, rn50.maxpool,
            rn50.layer1, rn50.layer2, rn50.layer3, rn50.layer4
        )

    def forward(self, x):
        x = self.start(x)
        return x


model = Middle_RN50(models.resnet50())
x = torch.randn(1, 3, 224, 224)
out = model(x)
print(out.shape)
# torch.Size([1, 2048, 7, 7])
```
I had to replace the unused model input argument with rn50, as this parameter was missing.
Given that, I guess you are posting modified code here (executing it as-is would yield an error complaining about an undefined variable), which might not represent your actual error.
Your code still works after removing the undefined parts:
```python
import torch
from torchvision import models


class Middle_RN50(torch.nn.Module):
    def __init__(self, rn50):
        super(Middle_RN50, self).__init__()
        self.start = torch.nn.Sequential(
            rn50.conv1, rn50.relu, rn50.maxpool,
            rn50.layer1, rn50.layer2, rn50.layer3, rn50.layer4
        )

    def forward(self, x):
        x = self.start(x)
        return x


class End_RN50(torch.nn.Module):
    def __init__(self, rn50):
        super(End_RN50, self).__init__()
        self.avgpool = rn50.avgpool
        self.fc = rn50.fc

    def forward(self, x):
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x


def calibrate_end(model):
    rn50_middle = Middle_RN50(model)
    rn50_middle.eval()
    rn50_end = End_RN50(model)
    rn50_end.eval()
    images = torch.randn(2, 3, 224, 224)
    temp = rn50_middle(images)
    output = rn50_end(temp)
    return output


model = models.__dict__['resnet50'](pretrained=False)
model.eval()
output = calibrate_end(model)
print(output.shape)
# torch.Size([2, 1000])
```
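For reference, the shape transformations in the End stage can be traced in isolation (a sketch using freshly initialized layers rather than the layers taken from the actual ResNet-50):

```python
import torch

# End stage in isolation: avgpool -> flatten -> fc
avgpool = torch.nn.AdaptiveAvgPool2d((1, 1))
fc = torch.nn.Linear(2048, 1000)

x = torch.randn(2, 2048, 7, 7)  # activation shape produced by the middle stage
x = avgpool(x)                  # -> [2, 2048, 1, 1]
x = torch.flatten(x, 1)         # -> [2, 2048]
out = fc(x)                     # -> [2, 1000]
print(out.shape)
# torch.Size([2, 1000])
```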
Generally, a minimal and executable code snippet can simply be copy-pasted into another environment, executed, and should reproduce the issue.
Your current code snippets still use undefined checkpoints, datasets, etc., and removing them makes the code work fine.
Thanks for the update.
I would guess your use case fails since you are first fusing the entire model and then trying to rip out smaller modules to create self.start.
Creating the nn.Sequential container with raw and fused modules should work: