Change MSELoss to cross entropy

Should I change MSELoss to cross entropy?

criterion = torch.nn.MSELoss(reduction='mean')
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
metrics = {'f1_score': f1_score, 'auroc': roc_auc_score}

I have RGB images and 32-bit masks, divided into two classes (background and the object of interest). Why am I getting the error below, and how can I change MSELoss to cross entropy?

File "/content/DeepLabv3FineTuning/trainer.py", line 58, in train_model
metric(y_true.astype('uint8'), y_pred))
File "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_ranking.py", line 390, in roc_auc_score
sample_weight=sample_weight)
File "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_base.py", line 77, in _average_binary_score
return binary_metric(y_true, y_score, sample_weight=sample_weight)
File "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_ranking.py", line 221, in _binary_roc_auc_score
raise ValueError("Only one class present in y_true. ROC AUC score "
ValueError: Only one class present in y_true. ROC AUC score is not defined in that case.

If you are working on a multi-class segmentation use case, nn.CrossEntropyLoss would be the preferred loss function.
The error itself is raised by an sklearn metric and is unrelated to the PyTorch criterion: roc_auc_score cannot be computed when y_true contains only a single class, which can happen for a batch whose mask is all background (or all foreground).
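As a minimal sketch of the switch (assuming a torchvision-style DeepLabv3 whose forward returns a dict with an 'out' key and a binary mask with values 0/1; the model and tensors below are dummy stand-ins, not your trainer.py code), it could look like this, including a guard around roc_auc_score for single-class batches:

import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score
from torchvision.models.segmentation import deeplabv3_resnet50

# Hypothetical stand-ins for the original setup: a 2-class DeepLabv3 head
# and a dummy batch of RGB images with binary masks.
model = deeplabv3_resnet50(num_classes=2)
images = torch.rand(2, 3, 224, 224)                     # RGB input batch
masks = torch.randint(0, 2, (2, 1, 224, 224)).float()   # binary mask, one channel

criterion = nn.CrossEntropyLoss()                       # expects raw logits, no softmax
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# CrossEntropyLoss wants class indices as a LongTensor of shape [N, H, W],
# so drop the channel dimension and cast the mask.
targets = masks.squeeze(1).long()

outputs = model(images)['out']                          # logits, shape [N, 2, H, W]
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()

# Guard the sklearn metric: roc_auc_score raises the ValueError from the
# traceback when the flattened batch contains only one class, so skip it then.
probs = torch.softmax(outputs, dim=1)[:, 1]             # foreground probability
y_true = targets.cpu().numpy().ravel()
y_score = probs.detach().cpu().numpy().ravel()
if np.unique(y_true).size > 1:
    auroc = roc_auc_score(y_true, y_score)

Alternatively, for a purely binary mask you could keep a single output channel and use nn.BCEWithLogitsLoss with a float target instead.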