Here are the shapes and dtypes I am working with:
- Image: torch.Size([1, 3, 1024, 1024]), dtype torch.float32
- Ground-truth mask: torch.Size([1, 1024, 1024]), dtype torch.float32
- Predicted mask: torch.Size([1, 3, 1024, 1024]), dtype torch.float32
As I understand it, BCELoss requires the input and target to have the same shape, but in my case the predicted mask is a 3-channel (RGB) tensor while the target is a single-channel (grayscale) mask.
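A minimal sketch of one way to resolve the mismatch, assuming this is binary segmentation: reduce the prediction to one channel so it matches the target shape, and use `BCEWithLogitsLoss` (which applies the sigmoid internally and is numerically more stable than `BCELoss`). The 1x1 convolution here is a hypothetical fix for illustration; ideally the model's final layer would output a single channel directly.

```python
import torch
import torch.nn as nn

# Shapes from the question: batch of 1, 1024x1024 images.
pred = torch.randn(1, 3, 1024, 1024)   # model output: 3 channels (as in the question)
target = torch.rand(1, 1024, 1024)     # ground-truth mask: float values in [0, 1]

# Hypothetical fix: a 1x1 conv collapses 3 channels to 1 so the
# prediction matches the target shape [1, 1024, 1024].
to_one_channel = nn.Conv2d(3, 1, kernel_size=1)
logits = to_one_channel(pred).squeeze(1)  # [1, 1, 1024, 1024] -> [1, 1024, 1024]

# BCEWithLogitsLoss expects raw logits and a float target of the same shape.
loss = nn.BCEWithLogitsLoss()(logits, target)
print(logits.shape, loss.item())
```

If the ground-truth mask holds class labels rather than probabilities in [0, 1], a multi-class setup with `CrossEntropyLoss` (which accepts a `[N, C, H, W]` input and a `[N, H, W]` long-typed target) would fit the existing shapes without any channel reduction.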