Negative dice loss and IoU score for segmentation

Yesterday I posted about a Unet I'm running for retinal image segmentation, with a resnet50 encoder backbone pretrained on ImageNet.

Now that it's finally training, I'm getting a negative IoU score and a very large dice loss, even though both are supposed to be between 0 and 1. My input images have three channels but they look like grayscale images. I suspect that applying the same preprocessing used for resnet50 on ImageNet (color images) may not be appropriate for grayscale images. When I look at an example of a training image after preprocessing, it seems to have lost a lot of information.

This is my image preprocessing:

data_transform = transforms.Compose([
    transforms.ToTensor(),                            # PIL image -> float tensor in [0, 1]
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
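A side note on "lost so much information": a normalized image often just *looks* degraded when plotted, because its values fall outside [0, 1] and get clipped by the viewer. To inspect it, you can invert the normalization first. A minimal sketch (the `denormalize` helper is hypothetical, not part of torchvision):

```python
import torch

# Normalize computes (x - mean) / std per channel,
# so the inverse for viewing is x * std + mean.
def denormalize(img,
                mean=(0.485, 0.456, 0.406),
                std=(0.229, 0.224, 0.225)):
    mean = torch.tensor(mean).view(3, 1, 1)
    std = torch.tensor(std).view(3, 1, 1)
    return img * std + mean

# round-trip check on a fake image
x = torch.rand(3, 224, 224)
mean = torch.tensor((0.485, 0.456, 0.406)).view(3, 1, 1)
std = torch.tensor((0.229, 0.224, 0.225)).view(3, 1, 1)
normed = (x - mean) / std
print(torch.allclose(denormalize(normed), x, atol=1e-5))  # True
```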

and my losses:

loss = smp.utils.losses.DiceLoss()
metrics = [
    smp.utils.metrics.IoU(threshold=0.5),
]

My masks are size [1, 224, 224], dtype torch.int64, with integer values 0 to 3 (one per class).
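For what it's worth, when predictions are probabilities and the integer mask is one-hot encoded to match them, a soft dice loss is bounded in [0, 1] by construction. A minimal sketch of that idea in plain PyTorch (this is my own illustration, not smp's exact implementation; shapes are assumptions):

```python
import torch
import torch.nn.functional as F

# Assumed shapes: logits [B, 4, H, W], mask [B, 1, H, W] with labels 0..3
def soft_dice_loss(logits, mask, eps=1e-7):
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    # one-hot the integer mask to match the prediction shape
    onehot = F.one_hot(mask.squeeze(1), num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = (2 * inter + eps) / (denom + eps)
    return 1 - dice.mean()  # always within [0, 1]

logits = torch.randn(2, 4, 64, 64)
mask = torch.randint(0, 4, (2, 1, 64, 64))
loss = soft_dice_loss(logits, mask)
print(0.0 <= loss.item() <= 1.0)  # True
```

So if the loss leaves [0, 1], either the predictions or the targets are not in the expected range, which is exactly what happened to me below.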

Edit: also, after preprocessing, a sample training image had a max pixel value of 1.6264 and a min pixel value of -1.9092. If I’m not mistaken, those values should be between 0 and 1, right?
Thanks
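Follow-up on the pixel-range question: no, negative values are expected there. Normalize computes (x - mean) / std on inputs in [0, 1], so each channel ends up spanning [(0 - mean)/std, (1 - mean)/std]. A quick check:

```python
# Per-channel range after Normalize((x - mean) / std) on inputs in [0, 1]
means = [0.485, 0.456, 0.406]
stds = [0.229, 0.224, 0.225]
for m, s in zip(means, stds):
    lo, hi = (0 - m) / s, (1 - m) / s
    print(f"channel range: [{lo:.3f}, {hi:.3f}]")
# the first channel spans roughly [-2.118, 2.249], so an observed
# min of -1.9092 and max of 1.6264 are perfectly normal
```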

Ok I fixed it. I realized in my mask_transform:

mask_transform = transforms.Compose([
    transforms.PILToTensor(),
    transforms.ConvertImageDtype(torch.int64),
])

that ConvertImageDtype was rescaling the mask values from 0, 1, 2, 3 to massive numbers more than 10 digits long. I commented that line out, and now both the dice loss and the IoU score are between 0 and 1.
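This makes sense: ConvertImageDtype treats the tensor as image *intensities* and rescales integer values to preserve relative brightness, roughly multiplying by (out_max + 1) // (in_max + 1). (That factor is my reading of torchvision's documented scaling behavior; exact rounding may differ.) A back-of-the-envelope sketch for a uint8 mask going to int64:

```python
# ConvertImageDtype rescales integers to preserve relative intensity,
# roughly value * ((out_max + 1) // (in_max + 1)).
in_max = 2**8 - 1    # uint8 mask from PILToTensor
out_max = 2**63 - 1  # int64 target
factor = (out_max + 1) // (in_max + 1)  # 2**55
for label in (0, 1, 2, 3):
    print(label, "->", label * factor)
# label 3 becomes 108086391056891904, an 18-digit number,
# which is why the loss blew up
```

If you do need the mask as int64 class indices, casting with `.long()` (e.g. via `transforms.Lambda(lambda m: m.long())`) changes the dtype without rescaling the values.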