Issue with utils.save_image

I want to use SPADE to do image synthesis. My input is a bunch of .tif files, which I need to convert to JPEG. I wrote the following script:

```python
index = 0
for img in img_files:
    tempstrg = img.split("_")
    lab = [fl for fl in label_files if (tempstrg[1] in fl and tempstrg[2] in fl)]
    image = Image.open(path_image + img)
    label = Image.open(path_label + lab[0])
    image = transforms.ToTensor()(image)
    label = transforms.ToTensor()(label)
    print("uniques in image :", len(np.unique(np.array(image))))
    print("uniques in label :", len(np.unique(np.array(label))))
    utils.save_image(image, img_jpg + str(index) + ".jpg")
    utils.save_image(label, label_jpg + str(index) + ".jpg")
    index += 1
```

As expected, the label map has a very low number of unique values (6 in my case), so all is well. But when I do

```python
for f in files:
    label = Image.open(path + f)
    label = transforms.ToTensor()(label)
    print(label.size())
    label = np.array(label)
    print(len(np.unique(label)))
```

the number of labels doesn't stay the same, and SPADE training crashes because of it. Is there some sort of normalization / interpolation done somewhere that I'm not aware of?

ToTensor will normalize the input image so that its output has values in the range [0, 1].
While this is expected for input image tensors, it can be problematic if you are dealing with mask images, which contain class indices.
Instead of ToTensor you should probably create the tensor via torch.from_numpy, which keeps the original values. Also, I'm not sure how save_image will act on the mask, so you should double-check that it doesn't change the values either.
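A minimal sketch of that round trip, with a made-up 6-class toy label map and filename standing in for the real data: keep the mask as a raw integer numpy array (torch.from_numpy would wrap the same array without any [0, 1] rescaling) and save it losslessly with PIL rather than utils.save_image:

```python
import numpy as np
from PIL import Image

# Toy 6-class label map (values 0-5); a stand-in for the real masks.
label = (np.arange(64 * 64) % 6).astype(np.uint8).reshape(64, 64)

# torch.from_numpy(label) would yield an integer tensor with the same
# values -- no division by 255, unlike transforms.ToTensor().

# Save with PIL as PNG: lossless, so the class indices survive intact,
# whereas lossy JPEG compression could introduce new values.
Image.fromarray(label).save("label_0.png")

# Reload and check that the set of unique values is unchanged.
reloaded = np.array(Image.open("label_0.png"))
print(len(np.unique(reloaded)))  # 6
```

The same check (counting uniques after reloading) is a quick way to confirm that whatever save path you use really preserves the class indices.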

I ended up doing something very simple: converting to a numpy array and saving with PIL. But thanks for the heads-up in any case!