Dataloader and transform on loaded dataframe


I am extremely new to PyTorch and would need a little guidance/clarification.

The tutorial I am basing my CNN on is: Training a Classifier — PyTorch Tutorials 1.10.0+cu102 documentation

I am trying to build my own CNN using a local dataset. The dataset consists of roughly 10,000 images of size 56x56.

I have loaded the data using pickle as follows:

with open("./images_l.pkl", 'rb') as f: imgs = pickle.load(f)
with open("./labels_l.pkl", 'rb') as f: labels = pickle.load(f)

From the tutorial, it seems we want to wrap the dataset in a data loader. So I convert the NumPy arrays into a dataset and then wrap it in a dataloader as follows:

trainset = torch.utils.data.TensorDataset(torch.from_numpy(imgs), torch.tensor(labels))
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4)

After doing what’s above, defining and instantiating the CNN, and setting the appropriate loss function/optimizer, I attempt to train the network, but there seems to be a dimension mismatch, which I assume is because no transformation is applied.

Error message:

Expected 4-dimensional input for 4-dimensional weight [6, 1, 5, 5], but got 3-dimensional input of size [4, 56, 56] instead
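For context, here is a minimal sketch that reproduces the mismatch, assuming the tutorial's first conv layer, `nn.Conv2d(1, 6, 5)` (the weight shape `[6, 1, 5, 5]` in the error corresponds to exactly that layer):

```python
import torch
import torch.nn as nn

# First conv layer from the tutorial: 1 input channel, 6 output channels, 5x5 kernel
conv = nn.Conv2d(1, 6, 5)
print(conv.weight.shape)  # torch.Size([6, 1, 5, 5])

# A batch of 4 grayscale 56x56 images *without* a channel dimension
bad_batch = torch.randn(4, 56, 56)
try:
    conv(bad_batch)
except RuntimeError:
    print("shape mismatch: conv expects [N, 1, H, W]")

# Inserting the channel dimension makes the input 4-dimensional: [4, 1, 56, 56]
good_batch = bad_batch.unsqueeze(1)
print(conv(good_batch).shape)  # torch.Size([4, 6, 52, 52])
```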

Any guidance would be greatly appreciated.


The channel dimension is missing in your inputs, so you would need to add it via torch.from_numpy(imgs).unsqueeze(1).

Thank you for your reply!

I was unaware that it is always recommended to use Custom Datasets when not using a PyTorch dataset.
After creating the custom dataset class and instantiating it, I have maintained the desired dimensions, but the images are losing the 3 RGB channels and are being converted to binary for some reason. I think it’s due to my transforms, but I’m still figuring that out.

You don’t necessarily need a custom Dataset if you unsqueeze the missing dimension before passing the tensor to TensorDataset.
Based on the error, it seems the loaded numpy array is already missing the channel dimension (i.e. the RGB channels), so you could check how these arrays are created.
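Putting both suggestions together, a minimal sketch (again with random stand-ins for the pickled arrays) that checks the array shape and builds the TensorDataset without a custom Dataset:

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical stand-ins for the pickled arrays
imgs = np.random.rand(100, 56, 56).astype(np.float32)
labels = np.random.randint(0, 10, size=100)

# Check the array shape first: if the channel dimension is already missing
# here, the real fix belongs wherever these arrays are created
print(imgs.shape)  # (100, 56, 56)

# unsqueeze the channel dimension before wrapping in TensorDataset
trainset = TensorDataset(torch.from_numpy(imgs).unsqueeze(1),
                         torch.tensor(labels))
trainloader = DataLoader(trainset, batch_size=4)

data, target = next(iter(trainloader))
print(data.shape)    # torch.Size([4, 1, 56, 56])
print(target.shape)  # torch.Size([4])
```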