Regression prediction out of range

Hi there,
I'm trying to use transfer learning with VGG16 and ImageNet weights for a regression problem. I'm replacing the last layer with a linear layer with output size 1 (see code below).

However, the predictions of my network are very strange. My training targets are property prices, hence >> 0, yet the network predicts only values < 0.

Any thoughts on this?

# Need to change dataloader to bring class model
model = models.vgg16_bn(pretrained=True)
criterion = nn.MSELoss()

# Freeze training for all feature-extractor layers
for param in model.features.parameters():
    param.requires_grad = False  # note: "requires_grad", not "require_grad"

# Newly created modules have requires_grad=True by default
num_features = model.classifier[6].in_features
features = list(model.classifier.children())[:-1]  # Remove last layer
features.extend([nn.Linear(num_features, 1)])      # Single regression output
model.classifier = nn.Sequential(*features)        # Replace the model classifier

# Build the optimizer after replacing the classifier so it sees the new layer,
# and pass only the parameters that still require gradients
optimizer = optim.Adam(p for p in model.parameters() if p.requires_grad)

if use_gpu:
    model.cuda()
num_epochs = 20
dir_ = 'Weigths/'
vgg16 = train_model(model, criterion, optimizer, num_epochs=num_epochs, classification=False)

I've checked the dataloader and it's feeding the right labels …

Thanks !

Hi,

You need to normalize your target values (the property prices); I'd guess your loss is exploding. With standard weight initialization, a network's raw outputs start close to zero, so it struggles to reach values as large as raw prices. Another option is to convert the problem into a classification task, e.g. class 1 for 0 < price <= 500, class 2 for 500 < price <= 1000, and so on.
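A minimal sketch of both ideas, with made-up prices for illustration: standardize the targets before training (and invert the transform at prediction time), or bucketize them into classes for a classification head.

```python
import torch
import torch.nn as nn

# Hypothetical raw targets (property prices), far outside the range a
# freshly initialized network produces
prices = torch.tensor([250_000., 480_000., 1_200_000., 315_000.])

# Standardize to zero mean, unit variance -- train the network on these
mean, std = prices.mean(), prices.std()
targets = (prices - mean) / std

# MSE on normalized targets stays in a numerically friendly range
criterion = nn.MSELoss()
dummy_pred = torch.zeros_like(targets)  # stand-in for model output
loss = criterion(dummy_pred, targets)

# At inference, map a normalized prediction back to a price
pred_price = dummy_pred * std + mean

# Alternative: bin prices into classes for a classification head
bins = torch.tensor([500_000., 1_000_000.])
classes = torch.bucketize(prices, bins)  # 0: <=500k, 1: (500k, 1M], 2: >1M
```

The same `mean`/`std` computed on the training set must be reused for validation and test targets, otherwise the de-normalized predictions are biased.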