UNet does not work after data augmentation

Hello everyone, I was trying to do some segmentation jobs with UNet. I have only 10 images; I use 8 of them as training images and 2 as test images. The images are greyscale, and the masks (ground truth) are binarized images (black stands for background, white for objects). I am using almost the most basic UNet. The test result is not great, but it works; most of the pixels are correctly predicted.

So I decided to rotate the original images and perform random crops on them to enlarge the training set and improve the test result. Unfortunately, the network does not work after those operations. With random crops, the trained network can't predict even a single part (the predicted mask is all black). And with a vertical or horizontal flip, the predicted result seems to be wrong; it is a combination of the original mask and the flipped mask. I have no idea what's wrong with my implementation. Does UNet not work with those augmentations, or is it something else?

Hope someone can give me some ideas.
Thanks

First, check whether you have performed the exact same augmentation on the masks as well. Beyond that, it is clear to me that data augmentation cannot improve generalization here, because your model never generalized in the first place. With only 8 training images, it overfitted the data, and because the validation set was so similar, it performed alright. 10 images is far too few for any good outcome.

You can find a small example of using the same “random” transformations here, which is important, as @bluesky314 described.

I have a doubt: if I call the transform function twice, will the random operation be the same or not? This is what I have:

from PIL import Image
from torch.utils.data import Dataset

class Data_set(Dataset):
    def __init__(self, transform):
        # imgpath and maskpath are defined elsewhere in my script
        imgs, masks = make_dataset(imgpath, maskpath)
        self.transform = transform
        self.imgs = imgs
        self.masks = masks

    def __len__(self):
        return len(self.imgs)

    def __getitem__(self, index):
        imgpath, maskpath = self.imgs[index], self.masks[index]
        img_x = Image.open(imgpath)
        mask_x = Image.open(maskpath)
        # the transform is called separately on the image and the mask
        img_x = self.transform(img_x)    # .float()
        mask_x = self.transform(mask_x)  # .long()
        return img_x, mask_x

I earlier tried it this way:

x, y = self.transform(image, mask)

but I don't know why that raises an error, so I call the transform twice instead.

If self.transform applies random transformations, both calls will use different random parameters, which will break the correspondence between the data and the target.
In my example, I’ve used the functional API to draw the random parameters only once and apply them to the image as well as to the target.
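Something along these lines should work (a minimal sketch assuming PIL images and torchvision; segmentation_transform is just an illustrative name):

import random
import torchvision.transforms.functional as TF

def segmentation_transform(image, mask):
    # Draw each random decision once and apply it to the image and the mask
    # alike, so the spatial correspondence between them is preserved
    if random.random() > 0.5:
        image = TF.hflip(image)
        mask = TF.hflip(mask)
    if random.random() > 0.5:
        image = TF.vflip(image)
        mask = TF.vflip(mask)
    return image, mask

You could then call it in __getitem__ as img_x, mask_x = self.transform(img_x, mask_x), instead of calling the transform twice.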

This solves the problem for the vertical/horizontal flip. Thanks a lot.
But with the crop, the same problem happens again. I am not sure if it's caused by a lack of training or something else. @ptrblck

I assume you’ve used the example code to crop your data.
If so, maybe try to relax the cropping a bit, i.e. sample only small random offsets, and see if the cropping operation might be causing trouble when training your model.
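For example, something like this (a sketch in the same functional-API style as above; paired_random_crop is a hypothetical helper), where you could start with an output size close to the full image and shrink it gradually:

import torchvision.transforms as T
import torchvision.transforms.functional as TF

def paired_random_crop(image, mask, size=(256, 256)):
    # Sample the crop window once, then apply the identical crop
    # to both the image and the mask
    i, j, h, w = T.RandomCrop.get_params(image, output_size=size)
    image = TF.crop(image, i, j, h, w)
    mask = TF.crop(mask, i, j, h, w)
    return image, mask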

I just tested it; the problem is not caused by the cropping. Thanks for the help. @ptrblck