How to permute elements of a multi-dimensional Variable along a specific dimension

For some reason, I still have to work with PyTorch 0.3.1.

For example, I have a batch of data of size [16, 3, 100, 200]; the samples are RGB images.

I want to shuffle the pixels of each image. It doesn’t matter whether all the images are shuffled the same way or not; I just need the pixels randomly shuffled when training on each batch.

I only shuffle pixels within the same image and the same channel: a pixel’s position changes, but its RGB value doesn’t.

That means:

for idx in range(batch_size):
    data[idx, :, :, :] = shuffle_an_image(data[idx, :, :, :])
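(As a hypothetical sketch, `shuffle_an_image` could look like this for a single [C, H, W] image, using one permutation shared by all channels so pixels keep their RGB values:)

```python
import torch

def shuffle_an_image(img):
    """Randomly permute the pixel positions of one [C, H, W] image.

    The same permutation is applied to every channel, so each pixel only
    moves to a new position and keeps its RGB value.
    """
    c, h, w = img.size()
    perm = torch.randperm(h * w)
    # Flatten the spatial dims, reorder them, then restore the shape.
    return img.view(c, -1).index_select(1, perm).view(c, h, w)
```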

Also, each image has a mask, and I have to permute the mask the same way.
The data type is Variable. I actually only use the shuffled images when computing the loss function.

I hope someone can help me.

I’ve implemented a permuted MNIST example some time ago here.
You could use it as a starter and just change the shuffled indices to your shape.
Let me know, if that works for you.
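(Roughly, the idea in that example is to draw one fixed permutation up front and apply it to every image, e.g. inside a transform. A sketch, not the exact code from the linked example; 28x28 is the MNIST shape, substitute your own H and W:)

```python
import torch

# Fix one permutation once; every image is then shuffled the same way.
h, w = 28, 28  # MNIST shape; replace with your own H, W
perm = torch.randperm(h * w)

def permute_pixels(img):
    """Apply the fixed permutation to a [C, H, W] image."""
    c = img.size(0)
    return img.view(c, -1).index_select(1, perm).view(c, h, w)
```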

Thanks a lot! That’s very helpful!
However, I can’t use torchvision.transforms, because it offers no joint transformation of multiple images.
For example, I have an input_rgb_image, a mask image, and a gt_image. Their pixels must be shuffled the same way.
I implemented it like this (treat it as pseudocode):

    size = rgb_image.size()
    perm = torch.randperm(size[2] * size[3])
    for idx in range(size[0]):
        for cn in range(size[1]):
            rgb_image[idx, cn, :, :] = rgb_image[idx, cn, :, :].view(-1)[perm].view(size[2], size[3])
            mask[idx, cn, :, :] = mask[idx, cn, :, :].view(-1)[perm].view(size[2], size[3])
            gt_image[idx, cn, :, :] = gt_image[idx, cn, :, :].view(-1)[perm].view(size[2], size[3])

Is there any way to eliminate the two for loops?
Python for loops are very slow, right?
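One possible loop-free version (a sketch, assuming a single permutation shared by the whole batch): flatten the spatial dims to [N, C, H*W] and select along the last dim with `index_select`, which replaces both Python loops. `view` and `index_select` are autograd ops, so this should also work on Variables in 0.3.1.

```python
import torch

def shuffle_jointly(*tensors):
    """Apply one random pixel permutation to every [N, C, H, W] tensor given.

    All tensors (e.g. rgb_image, mask, gt_image) are shuffled with the same
    indices, and selecting along the flattened spatial dim removes the need
    for any Python loops over batch or channels.
    """
    n, c, h, w = tensors[0].size()
    perm = torch.randperm(h * w)
    return tuple(t.contiguous().view(n, c, -1)
                  .index_select(2, perm)
                  .view(n, c, h, w) for t in tensors)
```

Usage: `rgb_image, mask, gt_image = shuffle_jointly(rgb_image, mask, gt_image)`.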