Help me decreasing loss

Hello Community, :wave: :slightly_smiling_face:
I have recently started deep learning, and was trying a project of image classification using transfer learning.
I used InceptionResnetV2 model in pytorch (from this repo ) pretrained on imagenet dataset.
Sadly, even after 40 epochs, the accuracy is somewhere around 15% (Yes, I used ReduceOnPlateaue LR Schedular). I am not sure if it is because of some bug that I might have accidently created in code while writing training script, or any other silly mistakes. Attached is the training part of my code with loss & accuracy details per epoch (full script at the bottom):

from tqdm.autonotebook import tqdm
from copy import deepcopy

n_epoch = 40
model.train()
epoch_losses = {
    'val': [],
    'train': []
}

epoch_accs = {
    'val': [],
    'train': []
}

best_acc = 0.0
best_model = deepcopy(model.state_dict())


for epoch in range(n_epoch):
  epoch_loss = 0.0
  epoch_acc = 0.0
  print('=' * 25, '[Epoch:', epoch+1, '/', n_epoch, ']', '=' * 25)

  for phase in ['train', 'val']:
    if phase == 'train':
      model.train()
    else:
      model.eval()

    running_loss = 0.0
    running_correct = 0.0

    for data in tqdm(snakeDataloader[phase], position=0, leave=False):
      img, label = data['img'], data['label']
      img = img.to(device)
      label = torch.tensor(label).to(device)

      optimizer.zero_grad()

      with torch.set_grad_enabled(phase=='train'):
        outs = model(img)
        preds = torch.argmax(outs, 1)
        loss = criterion(outs, label)

        if phase == 'train':
          loss.backward()
          optimizer.step()
        
        running_loss += loss.item() * img.size()[0]
        running_correct += torch.sum(preds == label.data)
    
    epoch_loss = running_loss / len(snakeDataloader[phase])
    epoch_acc = running_correct.double() / len(snakeDataloader[phase])

    print(f'[{phase}] => Acc: {epoch_acc :.2f}  Loss: {epoch_loss :.2f}')

    epoch_losses[phase].append(epoch_loss)
    epoch_accs[phase].append(epoch_acc)

    if phase == 'val' and epoch_acc > best_acc:
      best_acc = epoch_acc
      best_model = deepcopy(model.state_dict())
    elif phase == 'train':
      scheduler.step(epoch_loss)


========================= [Epoch: 1 / 40 ] =========================
[train] => Acc: 5.92  Loss: 255.16
[val] => Acc: 8.46  Loss: 208.18
========================= [Epoch: 2 / 40 ] =========================
[train] => Acc: 8.82  Loss: 201.05
[val] => Acc: 8.35  Loss: 209.16
========================= [Epoch: 3 / 40 ] =========================
[train] => Acc: 9.70  Loss: 196.58
[val] => Acc: 10.73  Loss: 198.33
========================= [Epoch: 4 / 40 ] =========================
[train] => Acc: 10.25  Loss: 194.29
[val] => Acc: 10.69  Loss: 197.92
========================= [Epoch: 5 / 40 ] =========================
[train] => Acc: 10.38  Loss: 192.18
[val] => Acc: 10.92  Loss: 195.03
========================= [Epoch: 6 / 40 ] =========================
[train] => Acc: 11.51  Loss: 188.25
[val] => Acc: 10.92  Loss: 193.92
========================= [Epoch: 7 / 40 ] =========================
[train] => Acc: 10.44  Loss: 190.62
[val] => Acc: 10.92  Loss: 193.32
========================= [Epoch: 8 / 40 ] =========================
[train] => Acc: 11.74  Loss: 187.99
[val] => Acc: 10.85  Loss: 194.34
========================= [Epoch: 9 / 40 ] =========================
[train] => Acc: 11.41  Loss: 188.46
[val] => Acc: 11.35  Loss: 194.20
========================= [Epoch: 10 / 40 ] =========================
[train] => Acc: 11.93  Loss: 185.79
[val] => Acc: 11.35  Loss: 197.31
========================= [Epoch: 11 / 40 ] =========================
[train] => Acc: 12.20  Loss: 183.68
[val] => Acc: 10.92  Loss: 199.71
========================= [Epoch: 12 / 40 ] =========================
[train] => Acc: 11.69  Loss: 189.29
[val] => Acc: 12.04  Loss: 190.32
========================= [Epoch: 13 / 40 ] =========================
[train] => Acc: 12.49  Loss: 183.14
[val] => Acc: 11.58  Loss: 191.34
========================= [Epoch: 14 / 40 ] =========================
[train] => Acc: 11.92  Loss: 184.91
[val] => Acc: 11.85  Loss: 192.28
========================= [Epoch: 15 / 40 ] =========================
[train] => Acc: 12.59  Loss: 183.28
[val] => Acc: 12.23  Loss: 189.71
========================= [Epoch: 16 / 40 ] =========================
[train] => Acc: 12.21  Loss: 183.75
[val] => Acc: 11.58  Loss: 190.22
========================= [Epoch: 17 / 40 ] =========================
[train] => Acc: 12.57  Loss: 182.45
[val] => Acc: 11.65  Loss: 192.05
========================= [Epoch: 18 / 40 ] =========================
[train] => Acc: 13.02  Loss: 182.48
[val] => Acc: 11.35  Loss: 192.22
========================= [Epoch: 19 / 40 ] =========================
[train] => Acc: 12.30  Loss: 182.98
[val] => Acc: 12.38  Loss: 189.69
========================= [Epoch: 20 / 40 ] =========================
[train] => Acc: 12.70  Loss: 182.28
[val] => Acc: 12.62  Loss: 188.30
========================= [Epoch: 21 / 40 ] =========================
[train] => Acc: 12.31  Loss: 181.50
[val] => Acc: 12.92  Loss: 186.95
========================= [Epoch: 22 / 40 ] =========================
[train] => Acc: 12.98  Loss: 180.89
[val] => Acc: 12.04  Loss: 191.68
========================= [Epoch: 23 / 40 ] =========================
[train] => Acc: 12.57  Loss: 182.21
[val] => Acc: 12.54  Loss: 188.88
========================= [Epoch: 24 / 40 ] =========================
[train] => Acc: 13.28  Loss: 178.77
[val] => Acc: 11.81  Loss: 192.00
========================= [Epoch: 25 / 40 ] =========================
[train] => Acc: 12.80  Loss: 179.73
[val] => Acc: 12.69  Loss: 189.20
========================= [Epoch: 26 / 40 ] =========================
[train] => Acc: 13.39  Loss: 178.53
[val] => Acc: 13.50  Loss: 186.30
========================= [Epoch: 27 / 40 ] =========================
[train] => Acc: 13.13  Loss: 179.55
[val] => Acc: 11.88  Loss: 191.30
========================= [Epoch: 28 / 40 ] =========================
[train] => Acc: 12.74  Loss: 180.83
[val] => Acc: 11.50  Loss: 192.44
========================= [Epoch: 29 / 40 ] =========================
[train] => Acc: 13.61  Loss: 178.84
[val] => Acc: 12.38  Loss: 186.29
========================= [Epoch: 30 / 40 ] =========================
[train] => Acc: 13.67  Loss: 177.28
[val] => Acc: 12.69  Loss: 188.18
========================= [Epoch: 31 / 40 ] =========================
[train] => Acc: 13.31  Loss: 179.06
[val] => Acc: 11.69  Loss: 190.83
========================= [Epoch: 32 / 40 ] =========================
[train] => Acc: 13.67  Loss: 177.76
[val] => Acc: 12.96  Loss: 188.28
========================= [Epoch: 33 / 40 ] =========================
[train] => Acc: 13.98  Loss: 177.72
[val] => Acc: 12.81  Loss: 186.19
========================= [Epoch: 34 / 40 ] =========================
[train] => Acc: 13.23  Loss: 178.71
[val] => Acc: 12.62  Loss: 186.99
========================= [Epoch: 35 / 40 ] =========================
[train] => Acc: 13.26  Loss: 178.14
Epoch    35: reducing learning rate of group 0 to 1.0000e-03.
[val] => Acc: 12.19  Loss: 187.05
========================= [Epoch: 36 / 40 ] =========================
[train] => Acc: 13.92  Loss: 174.30
[val] => Acc: 13.23  Loss: 184.83
========================= [Epoch: 37 / 40 ] =========================
[train] => Acc: 14.70  Loss: 172.65
[val] => Acc: 13.08  Loss: 183.59
========================= [Epoch: 38 / 40 ] =========================
[train] => Acc: 14.64  Loss: 172.27
[val] => Acc: 13.38  Loss: 183.50
========================= [Epoch: 39 / 40 ] =========================
[train] => Acc: 14.79  Loss: 172.33
[val] => Acc: 13.12  Loss: 183.73
========================= [Epoch: 40 / 40 ] =========================
[train] => Acc: 15.10  Loss: 170.81
[val] => Acc: 13.27  Loss: 182.62

Full notebook can be found here

P.S. I have already gone through posts with similar title (eg this) but they didn’t seem any helpful with this case.

@ptrblck ?

Can you show your model? How are you tweaking it to your particular application? Usually, people replace the last FC layer with their own layer enabling the model to learn the low-level features specific to the particular dataset. Try replacing your FC layer and then train it.

Yes, here is the colab notebook that I used. and this is source of my model (which automatically downloads the weights). For last layer, I did the following:

model.last_linear = nn.Sequential(
    nn.Linear(1536, 700),
    nn.ReLU(inplace=True),
    nn.Linear(700, 35)
)