Can we add two types of classifiers using transfer learning?

Suppose my image contains one animal from a set of 10 animals and one bird from a set of 10 birds.
Can we add two classifiers using transfer learning, so that one classifier tells which animal the image contains and the other tells which bird, and then we add both losses and do backpropagation?


Here, are animal classification and bird classification two independent tasks?

This you can easily do. Get the outputs for the two tasks and calculate the loss for each separately. While backpropagating, sum the losses and backpropagate once. If the tasks are related, joint training is a good idea and helps improve performance. Even if the tasks are not related, you can check whether any learning is happening. In some cases, for example when one result is a subset of the other, the two tasks may look redundant to us, but multi-task learning with parallel related tasks does help improve performance.
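A minimal sketch of this summed-loss setup (a toy shared `backbone` with two heads; all names and sizes here are illustrative):

```python
import torch
import torch.nn as nn

# Toy shared backbone with two task-specific heads (sizes are illustrative).
backbone = nn.Linear(32, 16)
head_a = nn.Linear(16, 10)   # e.g. 10 animal classes
head_b = nn.Linear(16, 10)   # e.g. 10 bird classes
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 32)                 # batch of 4 samples
labels_a = torch.randint(0, 10, (4,))
labels_b = torch.randint(0, 10, (4,))

features = backbone(x)                 # shared representation
loss_a = criterion(head_a(features), labels_a)
loss_b = criterion(head_b(features), labels_b)

# Summing the losses lets a single backward() send gradients through
# both heads and the shared backbone.
total_loss = loss_a + loss_b
total_loss.backward()
```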

can you show me how to do this with transfer learning? for example you can take VGG net

Take two VGG nets and load the pretrained weights. Write a dataloader that gives you a pair of images, one for task 1 and the other for task 2. Pass them to the networks and fine-tune them using loss.backward().
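Such a dataloader could be sketched like this (a hypothetical `Dataset`, with random tensors standing in for the real images):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PairDataset(Dataset):
    """Yields one (image, label) pair per task; random tensors stand in for real images."""
    def __init__(self, n=8):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        img1 = torch.randn(3, 224, 224)        # image for task 1
        img2 = torch.randn(3, 224, 224)        # image for task 2
        y1 = int(torch.randint(0, 10, (1,)))   # task-1 label
        y2 = int(torch.randint(0, 10, (1,)))   # task-2 label
        return img1, y1, img2, y2

loader = DataLoader(PairDataset(), batch_size=4)
img1, y1, img2, y2 = next(iter(loader))        # img1/img2: (4, 3, 224, 224)
```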

See, that’s the problem: I want to do it in the same model.
What I tried was modifying my output layer to 20 classes and using the first 10 for animals and the other 10 for birds, but I think that is wrong.
What I want to achieve is the same model, but with vgg.classifier twice.

You can’t have the same model with different classes. It can be two instances of the VGG model; it can’t be the same one.

Then how can this architecture be done in code?

For your task I think you can use two instances of VGG itself. I don’t see a problem with that approach.

See, I am working on a Hindi language dataset. In Hindi, an alphabet is a mix of a vowel and a consonant; it can be both. That’s why I am using this method; two instances are not working here.

So you are saying a single alphabet can be both a consonant and a vowel? And how many classes do you have in total for a single alphabet?

10,000 images, 10 vowels and 10 consonants.

OK, from what I have understood, you can have a separate classifier for consonants and a separate classifier for vowels. Pass the same alphabet to both classifiers. The consonant classifier will classify the alphabet to a consonant, and the vowel classifier will classify it to a vowel. Have some threshold on the classifiers’ confidence scores, so you can ignore low-confidence predictions.
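The confidence score can be read off the softmax output; a small sketch (the threshold value here is illustrative and should be tuned on validation data):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)        # one classifier's raw output for a batch of 4
probs = F.softmax(logits, dim=1)   # per-class probabilities
conf, pred = probs.max(dim=1)      # confidence score and predicted class

threshold = 0.5                    # illustrative; tune on a validation set
keep = conf >= threshold           # mask of predictions confident enough to trust
```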

Yes, but the hidden layers are the same, and on top of that the output of the hidden layers will pass through two classifiers that I can build with nn.Sequential.

I think you can do it. Let’s say you are using a ResNet-50:

import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = models.resnet50(pretrained=True)
inp = model.fc.in_features
bottle_neck = nn.Linear(inp, 256)

classifier1 = nn.Linear(256, 10).to(device)
classifier2 = nn.Linear(256, 10).to(device)

class Flatten(nn.Module):
    def forward(self, x):
        return x.view(x.size(0), -1)

# Drop resnet's final fc layer, flatten the pooled features, and
# project them to a 256-d bottleneck shared by both classifiers.
feature_extractor = nn.Sequential(*list(model.children())[:-1], Flatten(), bottle_neck).to(device)

Now, in the training loop, pass the same features to both classifiers, compute the losses, sum them (maybe balance them with a hyperparameter), and backpropagate.


Your code looks like what I want to achieve, but I don’t understand where you used classifier1 and classifier2.

Oh OK, I just explained that in words below the code:

features = feature_extractor(images)   # run the shared backbone once
outputs1 = classifier1(features)
outputs2 = classifier2(features)

loss1 = criterion(outputs1, labels1)
loss2 = criterion(outputs2, labels2)

total_loss = loss1 + (k * loss2)

This is what I meant above. Is this what you were asking?
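Putting those pieces into one full optimizer step could look like this (small stand-in modules with the same 256-d bottleneck shape; `k` is the loss-balancing hyperparameter mentioned above):

```python
import torch
import torch.nn as nn

# Small stand-ins shaped like the feature_extractor/classifier setup above.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 256))
classifier1 = nn.Linear(256, 10)
classifier2 = nn.Linear(256, 10)

params = (list(feature_extractor.parameters())
          + list(classifier1.parameters())
          + list(classifier2.parameters()))
optimizer = torch.optim.SGD(params, lr=0.01)
criterion = nn.CrossEntropyLoss()
k = 1.0                                   # balances the two task losses

images = torch.randn(4, 3, 8, 8)
labels1 = torch.randint(0, 10, (4,))
labels2 = torch.randint(0, 10, (4,))

optimizer.zero_grad()
features = feature_extractor(images)      # shared backbone runs once
outputs1 = classifier1(features)
outputs2 = classifier2(features)
loss1 = criterion(outputs1, labels1)
loss2 = criterion(outputs2, labels2)
total_loss = loss1 + (k * loss2)
total_loss.backward()
optimizer.step()
```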


Thank you so much, this looks exactly like what I wanted.

Can you do a code review? Will this work?

class Mymodel(nn.Module):
    def __init__(self):
        super(Mymodel, self).__init__()
        self.hidden_layer_output = nn.Sequential(*list(vgg.children())[:-1])
        self.vow_output = nn.Sequential(
                nn.Linear(512 * 7 * 7, 4096),
                nn.ReLU(inplace=True),
                nn.Linear(4096, 4096),
                nn.ReLU(inplace=True),
                nn.Linear(4096, 10))
        self.const_output = nn.Sequential(
                nn.Linear(512 * 7 * 7, 4096),
                nn.ReLU(inplace=True),
                nn.Linear(4096, 4096),
                nn.ReLU(inplace=True),
                nn.Linear(4096, 10))

    def forward(self, x):
        x = self.hidden_layer_output(x)   # shared VGG features
        x = x.view(x.size(0), -1)         # flatten to (batch, 512*7*7)
        x_vow_output = self.vow_output(x)
        y_const_output = self.const_output(x)
        return x_vow_output, y_const_output

The code seems ok to me.

Code looks good. Congrats