Hi there,
I’m quite new to torch, so maybe it’s something simple. I’ve trained my net, which is:
```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(4, 4)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)  # note: this overwrites the MaxPool2d(4, 4) above
        self.fc1 = nn.Linear(44896, 4096)
        self.fc2 = nn.Linear(4096, 1024)
        self.fc3 = nn.Linear(1024, 251)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.size(0), 44896)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
```
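As a side note on fc1’s hard-coded `in_features` of 44896: a dummy forward pass through the conv/pool stack shows where a figure like that comes from. The 256×196 input size below is an assumption (the post doesn’t state the image size); it is simply one size for which the flattened output comes out to exactly 44896.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sanity-check the flattened size fed to fc1 with a dummy input.
conv1 = nn.Conv2d(1, 6, 5)
conv2 = nn.Conv2d(6, 16, 5)
pool = nn.MaxPool2d(2, 2)  # the net's effective pool (the 4x4 one is overwritten)

x = torch.randn(1, 1, 256, 196)   # (batch, channels, H, W) -- assumed size
x = pool(F.relu(conv1(x)))        # -> (1, 6, 126, 96)
x = pool(F.relu(conv2(x)))        # -> (1, 16, 61, 46)
print(x.shape, x.numel())         # 16 * 61 * 46 == 44896
```

Running a check like this before hard-coding the `view` size makes shape mismatches much easier to diagnose.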
And now I want to remove the last layer (fc3) to turn my net into a feature extractor. So I’m loading the trained model and removing the last layer:
```python
import numpy as np
import matplotlib.image as mpimg
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# load the trained model
model = torch.load('./net_100_test_train')
model.eval()

# load, prepare and normalize a test image
img = mpimg.imread('./0298_0040_0001.png')
img = np.array(img * 2 - 1)
img = np.tile(img, (1, 1, 1))
img = torch.from_numpy(img)
img = torch.unsqueeze(img, 1)

# remove the last layer
new_model = nn.Sequential(*list(model.children())[:-1])
with torch.no_grad():
    output = new_model(img.to(device))
```
which gives me the following error:

```
RuntimeError: size mismatch, m1: [1952 x 93], m2: [44896 x 4096] at c:\a\w\1\s\windows\pytorch\aten\src\thc\generic/THCTensorMathBlas.cu:266
```
Is this because I’m using Sequential? How should I remove the last layer properly so that I can extract features?
edit:
I think I worked around it by replacing the last layer with an Identity layer, although the layer itself is still there.
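For reference, that Identity swap can be sketched like this on a small stand-in model (`TinyNet` below is hypothetical; it just mirrors the trained Net’s `fc3` attribute):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # hypothetical stand-in for the trained Net: a feature layer plus
    # a classifier head stored in an attribute named fc3, as in the post
    def __init__(self):
        super().__init__()
        self.fc2 = nn.Linear(8, 4)
        self.fc3 = nn.Linear(4, 2)

    def forward(self, x):
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

model = TinyNet()
model.fc3 = nn.Identity()  # forward() still calls fc3, but it now passes input through unchanged
with torch.no_grad():
    features = model(torch.randn(1, 8))
print(features.shape)  # the 4-dim fc2 features, not fc3's 2 logits
```

Unlike the `nn.Sequential(*list(model.children())[:-1])` approach, this keeps the original `forward()` intact, including the `F.relu` calls and the `view` reshape that `children()` does not capture, which is why it sidesteps the size mismatch.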