Is it possible to override default_collate in DataLoader?

I use DataLoader for my segmentation task, with two images per batch. Each image in a batch may be cropped into a different number of patches. So when the default_collate function runs inside DataLoader, I get an error at torch.stack(batch, 0, out=out): RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0.

if _use_shared_memory:
    # If we're in a background process, concatenate directly into a
    # shared memory tensor to avoid an extra copy
    numel = sum([x.numel() for x in batch])
    storage = batch[0].storage()._new_shared(numel)
    out = batch[0].new(storage)
return torch.stack(batch, 0, out=out)

By debugging, I found that items in the batch have different shapes, for example batch[0].shape = (2, 3, 512, 512) and batch[1].shape = (4, 3, 512, 512), because the two images produce 2 and 4 patches respectively. Stacking them directly along dim 0 causes that error.
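The failure can be reproduced outside of DataLoader with a minimal sketch (the shapes below match the ones from my debugging; torch.stack requires identical shapes across all tensors):

```python
import torch

# two "items" with different patch counts in dim 0
a = torch.zeros(2, 3, 512, 512)
b = torch.zeros(4, 3, 512, 512)

try:
    torch.stack([a, b], 0)  # same call default_collate makes
except RuntimeError as e:
    print("stack failed:", e)
```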

I considered adding a new axis at dim 0 with unsqueeze, giving (1, 2, 3, 512, 512) and (1, 4, 3, 512, 512), and then stacking and squeezing. But that would mean rewriting default_collate.

Is there a simpler way?

See the collate_fn argument of DataLoader: https://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader
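A minimal sketch of how this could look for your case, assuming you are happy to merge all patches of a batch along dim 0 with torch.cat instead of torch.stack (the PatchDataset here is a hypothetical stand-in for your dataset; patch counts and image sizes are taken from your example):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class PatchDataset(Dataset):
    """Hypothetical dataset: each item yields a variable number of patches."""
    def __init__(self, patch_counts):
        self.patch_counts = patch_counts

    def __len__(self):
        return len(self.patch_counts)

    def __getitem__(self, idx):
        # (num_patches, channels, height, width), num_patches varies per item
        return torch.zeros(self.patch_counts[idx], 3, 512, 512)

def patch_collate(batch):
    # Concatenate along dim 0 instead of stacking, so items with
    # different patch counts can share a batch.
    return torch.cat(batch, dim=0)

loader = DataLoader(PatchDataset([2, 4]), batch_size=2, collate_fn=patch_collate)
batch = next(iter(loader))
print(batch.shape)  # torch.Size([6, 3, 512, 512])
```

If you need to keep track of which patches belong to which image, you could instead return the batch as a plain list of tensors from collate_fn and iterate over it in the training loop.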