Hey all! I'm using the MNIST dataset available through torchvision and trying to use transform operations to create synthetic data. In addition to a regular train_set, where I only used transforms.ToTensor(), I wrote the following with the intention of appending it to the original train_set:
train_set2 = torchvision.datasets.MNIST(
    root='./data',
    train=True,
    download=True,
    transform=transforms.Compose([
        transforms.RandomAffine(degrees=20,
                                translate=(0.9, 0.9),
                                scale=(0.9, 1.1),
                                shear=(-20, 20)),
        transforms.ToTensor()
    ])
)
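For reference, my plan for the "appending" step is roughly the sketch below, using torch.utils.data.ConcatDataset. The two small TensorDatasets here are hypothetical stand-ins for my real train_set and train_set2, just so the snippet runs without downloading MNIST:

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset

# Hypothetical stand-ins for train_set and train_set2
# (same (1, 28, 28) sample shape as MNIST tensors)
plain_set = TensorDataset(torch.zeros(3, 1, 28, 28))
augmented_set = TensorDataset(torch.ones(2, 1, 28, 28))

# ConcatDataset chains the two datasets end to end
combined = ConcatDataset([plain_set, augmented_set])
print(len(combined))  # 5
```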
However, when I view the images extracted from the transformed dataset, they look identical to the untransformed originals. For example, here is how I compare them (with matplotlib imported as plt):

import matplotlib.pyplot as plt

plt.imshow(train_set.data[0])
plt.imshow(train_set2.data[0])

Both calls show the same, untransformed digit.
Any clarification would be greatly appreciated!