The trainloader size remains unchanged when concatenating data beforehand

I have an np.array of dtype uint8 containing face images as input. I create two different transforms and apply them to the same input. However, I run into a problem with the dataloader:
Its size is not what I expected: it is 3136 regardless of whether I use the augmentation or not.

import torch
import torchvision.transforms as T
from einops import rearrange
from torch.utils.data import ConcatDataset, DataLoader

# train_data is a uint8 np.array of face images in (b, h, w, c) layout
train_data = torch.from_numpy(train_data)
train_data = rearrange(train_data, 'b h w c -> b c h w')

transform1 = T.Compose([
    T.TrivialAugmentWide(),
    lambda x: x.float(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
transform2 = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

train_dataset = ConcatDataset(datasets=[
    # CustomDataset (defined elsewhere) wraps the same data with a different transform
    CustomDataset(data=train_data, labels=train_lab, transforms=transform1),
    CustomDataset(data=train_data, labels=train_lab, transforms=transform2),
])
train_dataloader = DataLoader(train_dataset, batch_size=parameters['batch_size'], sampler=sampler)
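
For reference, this is how I look at the sizes (just a minimal check; parameters, sampler, train_lab and CustomDataset are defined elsewhere in my code):

# the concatenated dataset reports the sum of the two sub-dataset lengths
print(len(train_dataset))
# the DataLoader length is the number of batches it will yield
print(len(train_dataloader))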

The dataset will usually apply the transformation on-the-fly to each sample inside its __getitem__ method. You are using a CustomDataset, which is not defined in the posted code snippet, so you would need to double-check its implementation.
If you are also sticking to this common approach, then the dataset size won't change whether a transformation is used or not.
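
If it helps, a typical map-style dataset following that approach would look roughly like this (a minimal sketch, since your actual CustomDataset was not posted):

from torch.utils.data import Dataset

class CustomDataset(Dataset):
    def __init__(self, data, labels, transforms=None):
        self.data = data              # tensor of shape (N, C, H, W)
        self.labels = labels
        self.transforms = transforms

    def __len__(self):
        # the length only depends on the number of stored samples,
        # never on the transforms
        return len(self.data)

    def __getitem__(self, index):
        x = self.data[index]
        y = self.labels[index]
        if self.transforms is not None:
            # the transformation is applied lazily, per sample
            x = self.transforms(x)
        return x, y

With this pattern, each dataset passed to ConcatDataset keeps the same __len__ no matter which transform it was given.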