Applying an image transform / data augmentation to a 2D FloatTensor / PIL Image

Dear All,
This is a bit long, so bear with me …

I have a CNN trained on pseudo-images of size 75×75×2. The “images” are taken from the Kaggle contest here: https://www.kaggle.com/c/statoil-iceberg-classifier-challenge and you can see my full code here: https://www.kaggle.com/solomonk/pytorch-gpu-cnn-bceloss-0-2198-lb

I use the following snippet for the train/test split:

class FullTrainningDataset(torch.utils.data.Dataset):
    def __init__(self, full_ds, offset, length):
        super(FullTrainningDataset, self).__init__()
        self.full_ds = full_ds
        self.offset = offset
        self.length = length
        assert len(full_ds) >= offset + length, "Parent dataset is not long enough"

    def __len__(self):
        return self.length

    def __getitem__(self, i):
        # The commented-out conversion attempts are shown further below.
        img, label = self.full_ds[i + self.offset]
        return img, label
        
    
validationRatio = 0.22

def trainTestSplit(dataset, val_share=validationRatio):
    val_offset = int(len(dataset) * (1 - val_share))
    print("Offset: " + str(val_offset))
    return (FullTrainningDataset(dataset, 0, val_offset),
            FullTrainningDataset(dataset, val_offset, len(dataset) - val_offset))
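For reference, the split arithmetic boils down to this (a minimal sketch; the helper name split_sizes is mine, not from the code above):

```python
def split_sizes(n, val_share=0.22):
    """Return (train_len, val_len) for a contiguous split of n samples."""
    val_offset = int(n * (1 - val_share))
    return val_offset, n - val_offset

print(split_sizes(100))  # (78, 22)
```

Since int() truncates, the two lengths always add up to n exactly, so no sample is dropped or duplicated.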

If I am not wrong, __getitem__(self, i) is the only place in which I can apply a transform of the form:

transforms.Compose([
    transforms.RandomCrop(XXX),
    transforms.RandomHorizontalFlip(),
])

Note that I am also using a TensorDataset and a torch.utils.data.DataLoader, as follows:

dset_train = TensorDataset(train_imgs, train_targets)
train_ds, val_ds = trainTestSplit(dset_train)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=batch_size,
                                           shuffle=False, num_workers=1)
val_loader = torch.utils.data.DataLoader(val_ds, batch_size=batch_size,
                                         shuffle=False, num_workers=1)

However, since the data is not a real image (it has only two channels), all my attempts to convert the FloatTensor to a PIL Image have failed. I want to convert it to a PIL Image in order to apply several transforms.
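One workaround I have considered (this is my own sketch, not code from the script: the synthetic third band and the min-max scaling are assumptions) is to stack a third channel so that Image.fromarray receives an (H, W, 3) uint8 array it can handle:

```python
import numpy as np
from PIL import Image

def two_channel_to_pil(arr):
    """Convert a (2, H, W) float array (e.g. img.numpy()) to an RGB PIL image.

    A synthetic third band (the mean of the two real bands) is stacked on,
    and the result is min-max scaled to 0..255 so it fits into uint8.
    """
    third = arr.mean(axis=0, keepdims=True)          # synthetic third band
    stacked = np.concatenate([arr, third], axis=0)   # (3, H, W)
    lo, hi = stacked.min(), stacked.max()
    scaled = (stacked - lo) / (hi - lo + 1e-8) * 255.0
    return Image.fromarray(scaled.transpose(1, 2, 0).astype('uint8'))
```

After the transforms, transforms.ToTensor() would bring it back, but the synthetic third channel would have to be dropped again, and the scaling destroys the original dB values, which is why I would prefer a cleaner approach.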

How can this be done?
Feel free to fork my script on Kaggle and run it. The specific code snippet, with the conversion attempts commented out, is:

def __getitem__(self, i):
        img, label = self.full_ds[i + self.offset]

        # Conversion attempts, commented out because each one fails for the
        # two-channel input:
        # img = img.numpy()
        # img = Image.fromarray(img)
        # img = Image.fromarray(img.astype('uint8'))
        # img = transform_train(img)
        # img = torch.from_numpy(np.asarray(img))

        return img, label
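In the meantime, the only thing that has worked for me is skipping PIL entirely and doing the augmentations at the tensor level (a sketch; the crop size 64 and the helper names are my own choices, not from the contest code, and torch.flip requires a reasonably recent PyTorch):

```python
import torch

def random_hflip(img, p=0.5):
    """Flip a (C, H, W) tensor along the width axis with probability p."""
    if torch.rand(1).item() < p:
        return torch.flip(img, dims=[2])
    return img

def random_crop(img, size):
    """Crop a (C, H, W) tensor to (C, size, size) at a random position."""
    _, h, w = img.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return img[:, top:top + size, left:left + size]

# Inside __getitem__, instead of the PIL round-trip:
# img = random_crop(random_hflip(img), 64)
```

This keeps everything as FloatTensors and works regardless of the channel count, but I would still like to know whether the PIL conversion itself can be made to work.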

Thanks,