Hi! I am new to PyTorch and I get this error when trying to create my first CNN:
I presume my tensors have different shapes, hence the error, right? How can I make the tensors the same size while preserving the 3 channels?
This is the simple model I was trying to run:
from torch import nn

simple_model = nn.Sequential(nn.Conv2d(in_channels=3, out_channels=8,
                                       kernel_size=3, stride=1, padding=1))

for image, label in train_dl:
    out = simple_model(image)
What is the size of the images in your batch? You may need to resize your images using a transform in your dataset before they can be fed into the network. I would guess that the error is telling you that the height dimensions of some of your images in the batch are not equal.
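A quick way to check is to open the raw files and print their sizes before any transforms run. A minimal sketch (the folder here is a synthetic stand-in with two differently sized images; point it at your own image directory instead):

```python
import os
import tempfile

from PIL import Image

# Stand-in folder with two images of different sizes -- replace with your own path.
folder = tempfile.mkdtemp()
Image.new("RGB", (640, 480)).save(os.path.join(folder, "a.jpg"))
Image.new("RGB", (500, 500)).save(os.path.join(folder, "b.jpg"))

sizes = {}
for name in sorted(os.listdir(folder)):
    with Image.open(os.path.join(folder, name)) as img:
        sizes[name] = img.size  # PIL reports (width, height)
        print(name, img.size)
```

If the printed sizes differ, the default collate step cannot stack the tensors into one batch, which matches the error you are seeing.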
They vary; actually, very few are square:
Question is, which function can I use to do this?
They don’t need to be square per se, but all the images in a batch do need to be the same size to allow for batched processing.
For example, this is a standard transform pipeline that resizes the images to 224x224 and makes them suitable for an ImageNet-trained classifier:

from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
This would then be passed into the transform argument when you create your dataset, and it would resize the images and apply those other transformations for you as well. You can see a worked example at https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html.
Great, now the tensors are reshaped, but I still get an error when passing the training set to the model:
Now I don’t know what this means…
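Without the full traceback it is hard to say, but a quick way to narrow it down is to print the shape and dtype of each batch right before the forward pass. A minimal sketch, with synthetic tensors standing in for your train_dl and the one-layer model from above:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-ins for the real data and model, just to show the debugging idea.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
train_dl = DataLoader(TensorDataset(images, labels), batch_size=4)

simple_model = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=1, padding=1)
)

for image, label in train_dl:
    # Print what actually goes into the model; shape or dtype mismatches show up here.
    print(image.shape, image.dtype, label.shape)
    out = simple_model(image)
    print(out.shape)
    break
```

Comparing the printed shape against what the first layer expects (here, a float tensor of shape (N, 3, H, W)) usually points straight at the mismatch.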