Does this refer to overfitting?

Sorry to ask this stupid question, but I have tried a lot and I still can't understand how this could happen.
First of all, I am using ResNet-18 as an age classifier with five age classes. I selected the CACD dataset, preprocessed with MTCNN (crop and align). Then I split the dataset into two groups: 0.9 for training and the rest for testing. Some settings are as follows (a rough sketch of the split and loaders is shown right after this list):
Optimizer: Adam;
Learning rate: 1e-5;
Batch size: 128;
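For context, the 0.9 / 0.1 split and the data loaders are built roughly like this; CACDDataset, CACD_ROOT and train_transform are placeholders for my own dataset code and are not shown here:

import torch
from torch.utils.data import DataLoader, random_split

# CACDDataset yields (image_tensor, age_class) pairs from the MTCNN-cropped faces
dataset = CACDDataset(root=CACD_ROOT, transform=train_transform)

n_train = int(0.9 * len(dataset))
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)
test_loader = DataLoader(val_set, batch_size=128, shuffle=False, num_workers=4)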
Actually, at first the validation accuracy increased as I expected, but once it reached 50% it stayed there for the whole training run. The training accuracy is 98%, yet the validation accuracy is still 50%. I'm not sure if my training code is wrong, since the validation accuracy doesn't change. Then I used the training data both to train and to validate, and that works very well, which suggests there may be nothing wrong with my training code? Here is the code of my "stupid" model.

import torch
import torch.nn as nn
import torch.optim as opt
import logging
from torchvision.models.resnet import ResNet, BasicBlock

# RESNET_MODEL_PATH, NUM_CLASS, DEVICE, AGE_LR, AGE_EPOCH, getModel, dataPre and
# AGE_MODEL_OUTPUT come from my own config/utility code.

def _resnet(arch, block, layers, pretrained, progress, **kwargs):
    model = ResNet(block, layers, **kwargs)
    if pretrained:
        # copy the pretrained ImageNet weights, skipping the fc layer whose
        # shape differs because of the custom number of age classes
        state_dict = torch.load(RESNET_MODEL_PATH)
        for key, value in state_dict.items():
            if 'fc' in key:
                continue
            model.state_dict()[key].copy_(state_dict[key])
    return model
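# Aside: I believe the fc-skipping copy above is equivalent to filtering the
# state dict and loading it non-strictly, which may be a bit less error-prone.
# (_load_backbone is just an illustrative name, not something from my project.)
def _load_backbone(model, path=RESNET_MODEL_PATH):
    state_dict = torch.load(path)
    filtered = {k: v for k, v in state_dict.items() if not k.startswith('fc')}
    model.load_state_dict(filtered, strict=False)  # the new fc keeps its random init
    return model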

class AgeClassifier(nn.Module):
    def __init__(self):
        super(AgeClassifier, self).__init__()
        self.resnet = _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained=True, progress=True,
                              num_classes=NUM_CLASS)

    def forward(self, x):
        logit_age = self.resnet(x)
        return logit_age

def train_net(pretrain=False):
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s - %(name)s-%(levelname)s-%(message)s',
                        filename='display.log')
    logger = logging.getLogger(__name__)

    model = AgeClassifier()
    if pretrain:
        model_path = getModel(AGE_MODEL_OUTPUT)
        model.load_state_dict(torch.load(model_path))
        print('load model:{}'.format(model_path.split('/')[-1]))
    model.to(DEVICE)
    train_loader, test_loader = dataPre(mode='train')

    optimizer = opt.Adam(model.parameters(), AGE_LR, betas=(0.5, 0.999))

    ###################
    #    Age_Loss     #
    ###################
    age_cls_criterion = nn.CrossEntropyLoss()

    train_acc, num_class_age = 0, 0
    for epoch in range(AGE_EPOCH):
        model.train()
        for batch_idx, data in enumerate(train_loader):
            feature, age_label = data
            feature = feature.to(DEVICE)
            age_label = age_label.to(DEVICE)

            optimizer.zero_grad()

            age_pre = model(feature)
            loss = age_cls_criterion(age_pre, age_label)

            loss.backward()

            optimizer.step()
            out_age = torch.max(age_pre, 1)[1]

            train_acc += (out_age == age_label).sum().item()
            num_class_age += age_label.size(0)
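And this is roughly how I compute the validation accuracy at the end of each epoch (slightly simplified; test_loader is the loader returned by dataPre above):

model.eval()
val_correct, val_total = 0, 0
with torch.no_grad():  # no gradients needed for evaluation
    for feature, age_label in test_loader:
        feature = feature.to(DEVICE)
        age_label = age_label.to(DEVICE)
        age_pre = model(feature)
        out_age = torch.max(age_pre, 1)[1]
        val_correct += (out_age == age_label).sum().item()
        val_total += age_label.size(0)
val_acc = val_correct / val_total
logger.info('epoch {}: val acc {:.4f}'.format(epoch, val_acc))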

I assume you are referring to the training and validation accuracy, respectively?
If so, it seems your model is indeed overfitting.
You could try to increase the regularization, use more aggressive data augmentation, and make sure both data splits contain images from the same domain and use the same preprocessing.
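For the augmentation, something along these lines with torchvision is a reasonable starting point (the exact values are not tuned for CACD and assume the usual ImageNet input size and normalization):

import torchvision.transforms as T

# more aggressive augmentation for the training split only
train_transform = T.Compose([
    T.RandomResizedCrop(224, scale=(0.7, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# the validation split should only get the deterministic part
val_transform = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

For the regularization, passing e.g. weight_decay=1e-4 to the Adam constructor would be a simple first step.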

Hi Ptrblck,
Yes, these refer to the training and validation accuracy. I'm sure the data comes from the same domain and uses the same preprocessing. Actually, I have tried some data augmentation, but maybe it doesn't work as I expected. Indeed, after the data augmentation the validation accuracy did rise, but it stopped at around 60%.