RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 4, 4], but got 3-dimensional input of size [3, 64, 64] instead

I think the DataLoader is not returning a 4-dimensional tensor (the batch dimension seems to be missing).

The images are 64x64 RGB.
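
For reference, a minimal sketch that reproduces this shape mismatch; the weight shape [64, 3, 4, 4] suggests a Conv2d(3, 64, kernel_size=4), which is an assumption here:

```python
import torch
import torch.nn as nn

# weight [64, 3, 4, 4] corresponds to Conv2d(3, 64, kernel_size=4)
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=4)

img = torch.randn(3, 64, 64)     # one RGB image, no batch dimension
# conv(img)                      # raises the RuntimeError above on older PyTorch
out = conv(img.unsqueeze(0))     # add the batch dimension -> [1, 3, 64, 64]
print(out.shape)                 # torch.Size([1, 64, 61, 61])
```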

It seems you are manually indexing into the returned sample, which might already be the data tensor, since you are creating the labels manually.
Could you check b_size and the shape of sample?
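
Something like this inside the training loop should show both values (a hypothetical debug snippet, assuming your loop mirrors the DCGAN tutorial):

```python
for i, data in enumerate(dataloader):
    print(type(data))            # tuple/list in the tutorial, or a plain tensor?
    sample = data[0] if isinstance(data, (list, tuple)) else data
    print("sample shape:", sample.shape)
    print("b_size:", sample.size(0))
    break                        # one batch is enough for debugging
```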

sample size is torch.Size([128, 3, 64, 64]) #128 is batch_size
b_size is 3

In that case, just assign sample to real_cpu, or use sample directly in your forward pass. Since sample is already the batch, sample[0] gives a single image of shape [3, 64, 64], which explains both the 3-dimensional input in the error message and b_size == 3.
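
In code, the fix would look roughly like this (netD and device are assumed from the DCGAN tutorial, not your exact code):

```python
for i, sample in enumerate(dataloader):
    real_cpu = sample.to(device)   # sample is already the full image batch
    b_size = real_cpu.size(0)      # 128 here, the actual batch size
    output = netD(real_cpu)        # 4-dimensional input, as the conv expects
```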

I was thinking about that, but the DCGAN tutorial here in PyTorch uses the same type of code. However, they used torchvision.datasets.ImageFolder to create the DataLoader, which is not working in my case, so I manually created the DataLoader by returning self.item in the Dataset class and then passing parameters like batch_size, shuffle, etc.
Please take a look
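
For context, such a custom Dataset looks roughly like this (class and variable names are illustrative, not the exact code):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ImageOnlyDataset(Dataset):
    """Returns bare image tensors, unlike ImageFolder's (image, label) tuples."""
    def __init__(self, items):
        self.items = items                  # e.g. a tensor of shape [N, 3, 64, 64]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]              # image only, no target

images = torch.randn(1000, 3, 64, 64)       # dummy stand-in for real images
dataloader = DataLoader(ImageOnlyDataset(images), batch_size=128, shuffle=True)
```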

In the DCGAN code, data contains the input and target tensors as a tuple.
Thus the code uses indexing, data[0], to get the input sample.
If your DataLoader and Dataset only return the input sample without a target, you don't need this indexing.
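
Side by side, the two cases look like this (a sketch, with device assumed):

```python
# DCGAN tutorial: ImageFolder batches are (image_batch, label_batch) tuples
for data in dataloader:
    real_cpu = data[0].to(device)   # index into the tuple first

# Dataset returning only images: each batch is already the image tensor
for data in dataloader:
    real_cpu = data.to(device)      # no indexing needed
```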


Hey, it worked fine! Thanks for the hack!