Loss consistently between 0.7 and 0.8 with around 50% accuracy

Hi all, my training and validation losses are stuck between 0.7 and 0.8, while accuracy hovers around 50%.
I'm using transfer learning, so in the training phase the source's classification loss is used together with a distance loss that measures the discrepancy between the source and target datasets.

Does anyone have any advice on what to do about the stagnating loss? I'm a newbie, so I'd appreciate any advice, however obvious it may seem. Thanks.

        with torch.set_grad_enabled(phase == 'source_train'):
            features = model(inputs)
            # Classifier layer is instantiated here, inside the loop
            classifier_layer = nn.Linear(8000, 2)
            outputs = classifier_layer(features)
            if phase != 'source_train':
                loss = loss_func(outputs, labels)
            else:
                # First labels.size(0) rows are source samples; the rest are target
                logits = outputs.narrow(0, 0, labels.size(0))
                classifier_loss = loss_func(logits, labels)
                distance_loss = DAN(
                    features.narrow(0, 0, labels.size(0)),
                    features.narrow(0, labels.size(0), inputs.size(0) - labels.size(0)),
                )
                loss = classifier_loss + distance_loss
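For context, `DAN` is not shown in the snippet; DAN-style distance losses are typically a maximum mean discrepancy (MMD) between the source and target feature batches. A minimal linear-kernel MMD sketch (an assumption about what `DAN` computes, not its actual implementation) looks like this:

```python
import torch

def linear_mmd(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    # Linear-kernel MMD: squared Euclidean distance between the mean
    # feature vectors of the source batch and the target batch.
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return (delta * delta).sum()

# Toy feature batches: 8 samples each, 16-dim features.
src = torch.randn(8, 16)
tgt = torch.randn(8, 16)
print(float(linear_mmd(src, tgt)))
```

The distance is zero when the two batches have identical mean features and grows as the feature distributions drift apart, which is what the `classifier_loss + distance_loss` sum above is penalizing.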

Try printing out the labels every 10 or 20 steps (depending on the number of steps per epoch) and check that they are actually changing.
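A minimal sketch of that check, assuming a standard `DataLoader` loop (the dataset, batch size, and print interval here are placeholders for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset: 64 samples with random binary labels.
inputs = torch.randn(64, 10)
labels = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(inputs, labels), batch_size=16, shuffle=True)

seen = []  # collected label batches, just to inspect afterwards
for step, (x, y) in enumerate(loader):
    if step % 2 == 0:  # print every N steps (N=2 for this tiny loop)
        seen.append(y.tolist())
        print(f"step {step}: labels = {y.tolist()}")
```

If the printed labels look identical batch after batch, the loader is probably not shuffling (or the labels are being overwritten somewhere upstream), which alone is enough to pin accuracy at 50% on a balanced binary task.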