Replacing CrossEntropyLoss() with nn.BCEWithLogitsLoss()

https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html

In this tutorial, what changes should one make to use nn.BCEWithLogitsLoss() instead of nn.CrossEntropyLoss()?

nn.CrossEntropyLoss is used for multi-class classification (or segmentation), where each sample (or pixel) belongs to exactly one class. nn.BCEWithLogitsLoss is used for binary or multi-label classification (or segmentation), where each sample (or pixel) can belong to zero, one, or multiple classes.
Depending on your actual use case (e.g. binary vs. multi-label), you would need to change the model's output shape and the targets. For binary classification, for example, the model would output logits of shape [batch_size, 1], and the target would contain float values in [0, 1] with the same shape.
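As a minimal sketch of the binary case (assuming a resnet-style backbone like the tutorial's, where the final `fc` layer is replaced; `num_ftrs` stands in for the backbone's feature dimension):

```python
import torch
import torch.nn as nn

num_ftrs = 512                  # assumption: e.g. resnet18's model.fc.in_features
fc = nn.Linear(num_ftrs, 1)     # single logit per sample, instead of one per class

criterion = nn.BCEWithLogitsLoss()

features = torch.randn(4, num_ftrs)            # stand-in for the backbone's features
logits = fc(features)                          # shape [batch_size, 1]
targets = torch.randint(0, 2, (4, 1)).float()  # float targets in {0, 1}, same shape

loss = criterion(logits, targets)
preds = (torch.sigmoid(logits) > 0.5).long()   # threshold probabilities at 0.5
```

Note that BCEWithLogitsLoss expects float targets of the same shape as the logits (unlike CrossEntropyLoss, which takes long class indices), and that predictions are obtained by thresholding the sigmoid of the logit rather than taking an argmax.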

Thank you very much, that cleared my doubt!