I’m trying to create a dataset of rotated MNIST images as inputs and the angles by which each one was rotated as outputs. Is there a way to do this without unsqueezing and squeezing each one?
When I try to do the following:
test = transforms.functional.rotate(dataset1.data[0], 90)
I get an error:
grid_sampler(): expected grid to have size 1 in last dimension, but got grid with sizes [1, 28, 28, 2]
This can be fixed by unsqueezing and then squeezing the image:
test = torch.unsqueeze(dataset1.data[0], 0)
test = transforms.functional.rotate(test, 90)
test = torch.squeeze(test, 0)
However, doing this per image seems unnecessary and inefficient. Is there a way to avoid it?
I’m also not sure whether this kind of per-image, iterative approach is the right way to build such a dataset, but it seems to me there is no easy way to apply a random rotation to every image while also saving each angle somewhere (which is where torchvision.transforms.RandomRotation falls short for me: it applies a random rotation but doesn’t expose the sampled angle).