I assume your model has two different outputs, i.e. one trained with nn.BCELoss and the other with nn.CrossEntropyLoss?
And now one part of your model learns quite well, while the other gets stuck?
A weighting of these losses might be a good idea.
Could you compare the ranges of both losses and try to rescale them to a similar range?
Also, as a small side note: if you are using nn.CrossEntropyLoss for classification, you should pass the raw logits to this criterion, not the probabilities from nn.Softmax, since nn.CrossEntropyLoss applies log_softmax internally.
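A small snippet to illustrate this point (the shapes here are arbitrary): nn.CrossEntropyLoss is equivalent to log_softmax followed by nll_loss, so passing softmax probabilities applies the softmax twice and gives a different (distorted) loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 3)
target = torch.randint(0, 3, (4,))

criterion = nn.CrossEntropyLoss()

# Correct: pass raw logits; the criterion applies log_softmax internally
loss_right = criterion(logits, target)

# Equivalent manual formulation, showing what happens inside the criterion
loss_manual = F.nll_loss(F.log_softmax(logits, dim=1), target)

# Wrong: softmax is applied twice, which squashes the logits
loss_wrong = criterion(F.softmax(logits, dim=1), target)
```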