Training loss stopped decreasing after 10 epochs

I am working on a regression problem for images using a CNN. My training loss decreases for a number of epochs, but after 10 epochs it stops decreasing and starts fluctuating around the loss value it had at epoch 10. I tried decreasing the learning rate, but that did not help. I also increased the network complexity, but that did not help either. Does anyone have any ideas?

My network is given below:

import torch
import torch.nn as nn
from collections import OrderedDict

class Purity(nn.Module):
    def __init__(self, dropout=True):
        super(Purity, self).__init__()
        print("Purity predictor model")
        self.reg_features = nn.Sequential(OrderedDict([
            ("conv1", nn.Conv2d(6, 96, kernel_size=11, stride=4)),
            ("relu1", nn.ReLU(inplace=True)),
            ("pool1", nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)),
            ("norm1", nn.LocalResponseNorm(5, 1.e-4, 0.75)),
            ("conv2", nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=2)),
            ("relu2", nn.ReLU(inplace=True)),
            ("pool2", nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)),
            ("norm2", nn.LocalResponseNorm(5, 1.e-4, 0.75)),
            ("conv3", nn.Conv2d(256, 384, kernel_size=3, stride=2)),
            ("relu3", nn.ReLU(inplace=True)),
            ("conv4", nn.Conv2d(384, 384, kernel_size=3, padding=1, groups=2)),
            ("relu4", nn.ReLU(inplace=True)),
            ("conv5", nn.Conv2d(384, 256, kernel_size=3, padding=1, groups=2)),
            ("relu5", nn.ReLU(inplace=True)),
            ("pool5", nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)),
        ]))
        self.regressor = nn.Sequential(OrderedDict([
            # pool5 yields 256 x 3 x 3 for a 224x224 input, i.e. 3*3*256 = 2304
            # flattened features ("33256" in the original post looks like
            # "3*3*256" with the asterisks eaten by the forum formatting)
            ("fc8", nn.Linear(3 * 3 * 256, 4096)),
            ("relu8", nn.ReLU(inplace=True)),
            # nn.Identity stands in for the undefined Id() helper
            ("drop6", nn.Dropout() if dropout else nn.Identity()),
            ("fc9", nn.Linear(4096, 1024)),
            ("relu9", nn.ReLU(inplace=True)),
            ("drop9", nn.Dropout() if dropout else nn.Identity()),
            ("fc10", nn.Linear(1024, 1)),
            ("sigmoid", nn.Sigmoid())
        ]))

    def forward(self, x):
        x = self.reg_features(x)
        x = torch.flatten(x, 1)  # flatten to (batch, features) for the FC head
        return self.regressor(x)
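
For reference, a quick shape sanity check (a minimal sketch; the six-channel 224x224 input matches the layer sizes above, the batch size of 2 is arbitrary):

model = Purity(dropout=True)
dummy = torch.randn(2, 6, 224, 224)   # batch of 2 six-channel 224x224 images
features = model.reg_features(dummy)
print(features.shape)                 # torch.Size([2, 256, 3, 3]) -> 2304 flat
out = model(dummy)
print(out.shape)                      # torch.Size([2, 1]), in (0, 1) via the sigmoid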

Well, what you are describing is not a problem by itself.
The loss will always stop decreasing at some point. You should use other metrics to evaluate the quality of your regression.
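
For example, MAE and R² on a held-out set say more about regression quality than the raw training loss. A rough sketch (model, val_loader, and device stand in for your own objects):

import torch

@torch.no_grad()
def evaluate(model, val_loader, device):
    model.eval()
    preds, targets = [], []
    for x, y in val_loader:
        preds.append(model(x.to(device)).squeeze(1).cpu())
        targets.append(y.float().cpu())
    preds, targets = torch.cat(preds), torch.cat(targets)
    mae = (preds - targets).abs().mean()             # mean absolute error
    ss_res = ((targets - preds) ** 2).sum()
    ss_tot = ((targets - targets.mean()) ** 2).sum()
    return mae.item(), (1 - ss_res / ss_tot).item()  # MAE, R^2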

@JuanFMontesinos My loss decreased from 1 to 0.3, and according to my problem statement it should decrease further.

@Arfeen Did you use data augmentation while loading the data with the DataLoader?

import torch
from torchvision import datasets, transforms

def load_data(root_path, dir, batch_size, phase):
    # train: random crop + horizontal flip for augmentation;
    # test: deterministic resize (Resize((224, 224)) rather than Resize(224),
    # which only fixes the shorter side and can yield non-square tensors
    # that fail to batch)
    transform_dict = {
        'train': transforms.Compose(
            [transforms.RandomResizedCrop(224),
             transforms.RandomHorizontalFlip(),
             transforms.ToTensor(),
             transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                  std=[0.229, 0.224, 0.225]),
             ]),
        'test': transforms.Compose(
            [transforms.Resize((224, 224)),
             transforms.ToTensor(),
             transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                  std=[0.229, 0.224, 0.225]),
             ])}
    data = datasets.ImageFolder(root=root_path + dir, transform=transform_dict[phase])
    data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True,
                                              drop_last=False, num_workers=4)
    return data_loader
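
Called like this for a hypothetical layout with /data/images/train/ and /data/images/test/ subfolders (paths and batch size are made up):

train_loader = load_data('/data/images/', 'train', batch_size=32, phase='train')
test_loader = load_data('/data/images/', 'test', batch_size=32, phase='test')

One caveat: datasets.ImageFolder derives integer class labels from the subfolder names, so for a regression target you would normally return the continuous label from a custom Dataset instead.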

Also, I would recommend training for more epochs, as 10 seems low for such a complicated network.
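
You could also drop the learning rate automatically once the loss plateaus instead of picking a single smaller value. A minimal sketch with ReduceLROnPlateau (model, loader, and train_one_epoch are placeholders; the optimizer and scheduler settings are illustrative):

import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative settings
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=5)

num_epochs = 50
for epoch in range(num_epochs):
    train_loss = train_one_epoch(model, loader, optimizer)  # your training step
    scheduler.step(train_loss)  # shrinks lr when the loss stops improving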

@_joker Yes, I am applying both strong and weak augmentation on top of the created Dataset object itself, and then using the DataLoader for mini-batches in my implementation. I am training for 50 epochs; at the 10th epoch the loss came down to 0.3, and after that it stopped decreasing. It fluctuates between 0.3 and 0.4.