Transfer Learning - VGG - MultiClass and MultiLabel, Very low accuracy rates

Hi guys,

I am trying to perform transfer learning by customizing VGG for multi-class, multi-label classification.

Here is what the modified model looks like:

Model(
  (conv_model): VGG(
    (features): Sequential(
      (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU(inplace=True)
      (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): ReLU(inplace=True)
      (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (6): ReLU(inplace=True)
      (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (8): ReLU(inplace=True)
      (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (11): ReLU(inplace=True)
      (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (13): ReLU(inplace=True)
      (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (15): ReLU(inplace=True)
      (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (18): ReLU(inplace=True)
      (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (20): ReLU(inplace=True)
      (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (22): ReLU(inplace=True)
      (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (25): ReLU(inplace=True)
      (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (27): ReLU(inplace=True)
      (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (29): ReLU(inplace=True)
      (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
    (classifier): Sequential(
      (0): Linear(in_features=25088, out_features=4096, bias=True)
      (1): ReLU(inplace=True)
      (2): Dropout(p=0.5, inplace=False)
      (3): Linear(in_features=4096, out_features=4096, bias=True)
      (4): ReLU(inplace=True)
    )
  )
  (classifiers_0): Sequential(
    (0): Linear(in_features=4096, out_features=10, bias=True)
  )
  (classifiers_1): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_2): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_3): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_4): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_5): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_6): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_7): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_8): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_9): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
  (classifiers_10): Sequential(
    (0): Linear(in_features=4096, out_features=11, bias=True)
  )
)

I have created a custom Dataset that returns tensors. I pass batches through the model and calculate the loss. I am using cross-entropy loss for the multi-label classification and the Adam optimizer. However, my accuracy is only coming out around 10%. The dataset is quite large.
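For what it's worth, with one softmax head per output, each head is a standard multi-class problem, so the usual pattern is one `CrossEntropyLoss` per head summed into a single loss. A sketch (the function name and target layout are assumptions):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()


def multi_head_loss(outputs, targets):
    """outputs: list of per-head logits, each of shape (N, C_i);
    targets: (N, num_heads) tensor of class indices, one column per head.
    Sums the per-head cross-entropy losses into a single scalar."""
    return sum(criterion(logits, targets[:, i])
               for i, logits in enumerate(outputs))
```

Note that `CrossEntropyLoss` expects raw logits (no softmax in the model) and integer class indices as targets.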

I have played around with the learning rate to no avail… Any pointers are much appreciated.

Thank you.

Could you explain your use case a bit more?
It seems your model uses 11 different heads: classifiers_0 with 10 classes, and classifiers_1 through classifiers_10 with 11 classes each.

Does “multi-class and multi-label” mean that multiple “heads” might be active, but each head has to return a single class?

Yes. Classifier 0 identifies the number of labels, and the rest each predict one label out of a list of 11 class labels.

Are the “rest” predicting the same labels? I.e., does class 0 refer to the same class in all 10 heads?
If so, how are you processing the output if all heads predict different labels and e.g. classifier_0 predicts 0 valid labels?

Maybe I didn’t do a good job explaining it… here you go:

Example: extract all the letters in a natural scene (given an alphabet of only 11 letters).

classifier 0 -> length of the word (max 10)
classifiers 1-10 -> the letter applicable at that position.

Does that help?