FNN Dimension Error

Hello, I’m trying to train a feedforward neural net on the CIFAR10 dataset. Whenever I run the cell to train the model, I get a dimension error. I’m using cross-entropy loss and an SGD optimizer with a learning rate of 0.01.

When I transformed the RGB dataset to grayscale, interestingly I was able to train the model. Some help would be great! Thanks.

Below is the code for the model and the error.

batch_size = 100
input_dim = 1024  
output_dim = 10  
hidden_dim = 500

class FNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(FNN, self).__init__()

        self.fc1 = nn.Linear(input_dim, hidden_dim) # has 2 params

        self.relu = nn.ReLU()

        self.fc2 = nn.Linear(hidden_dim, output_dim)
    
    def forward(self, x):
        # print("x size is {}".format(x.size()))
        # size output was [300, 1024]

        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# fc1 Params size [500, 1024] and [500]
# fc2 params size [10, 500] and [10]

It seems that the error is raised due to a shape mismatch between the model output and the target.
Could you check their shapes before feeding them to the criterion, and also post the code showing how you are creating these tensors?

The other issue, regarding the shape mismatch between RGB and grayscale images, is that you are defining in_features=1024 in your first layer. While this fits the input dimension of a flattened grayscale 32*32=1024 image, it will fail for a 3*32*32 image, so you would have to increase the number of input features by a factor of 3 (to 3*32*32=3072).
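
A minimal sketch of that shape check, with the first layer widened to 3*32*32 as suggested (the dummy batch here is an assumption standing in for your actual data, and it assumes each image is flattened to one vector before the first linear layer):

import torch
import torch.nn as nn

# FNN is the model class you posted above
model = FNN(input_dim=3 * 32 * 32, hidden_dim=500, output_dim=10)  # 3072 inputs for RGB CIFAR10
criterion = nn.CrossEntropyLoss()

images = torch.randn(100, 3, 32, 32)      # stand-in batch shaped like a CIFAR10 RGB batch
labels = torch.randint(0, 10, (100,))     # integer class indices, shape [100]

images = images.view(images.size(0), -1)  # flatten to [100, 3072]
outputs = model(images)

print(outputs.shape)                      # torch.Size([100, 10])
print(labels.shape)                       # torch.Size([100])
loss = criterion(outputs, labels)         # expects [N, C] logits and [N] class-index targets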

Thanks for the help.
I checked the shape of images during training and it was [300, 1024]. I reshaped images into a 3D tensor [100, 3, 1024], which I think solved my previous dimension issue.

After reshaping I have another dimension issue at the loss = criterion(outputs, labels) step.

ValueError: Expected target size (100, 10), got torch.Size([100])

When I printed the size of labels, it was also [100]. When I reshaped labels to [100, 1], the shape in the ValueError above also changed to [100, 1]. It looks like labels should have size [100, 10] (batch = 100 and 10 classes)? Any suggestions on how to debug this?
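
For context on that expected size: nn.CrossEntropyLoss treats a 3-D input of shape [N, C, d] as per-position classification and then expects a target of shape [N, d], which is why a [100, 3, 10] output (as printed in the training code below) leads to an expected target size of (100, 10). A small reproduction with dummy tensors (a sketch, not taken from the actual training code):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
targets = torch.randint(0, 10, (100,))    # class indices, shape [100]

# 2-D logits [N, C] work with [N] targets
logits_2d = torch.randn(100, 10)
print(criterion(logits_2d, targets))

# 3-D logits [N, C, d] are read as [batch, classes, extra dim]
# and expect [N, d] targets -- hence "Expected target size (100, 10)"
logits_3d = torch.randn(100, 3, 10)
try:
    criterion(logits_3d, targets)
except (ValueError, RuntimeError) as e:
    print(e)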

Below is the code for training.

model = FNN(input_dim,hidden_dim, output_dim)

criterion = nn.CrossEntropyLoss() 
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

iteration = 1
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(trainloader):
       
        images = images.view(-1, 32*32).requires_grad_()
        images = images.reshape([100,3,1024])
        
        print('label size is {}'.format(labels.size())) 
        # [100]
        print('image size is {}'.format(images.size())) 
        # [100,3,1024]

        # clear gradients from previous iters w.r.t params
        optimizer.zero_grad()

        # forward pass to get outputs
        outputs = model(images)

        print('the output sizes {}'.format(outputs.size()))
        #[100,3,10]

        # Calculate loss
        loss = criterion(outputs, labels)

        # getting gradients w.r.t params
        loss.backward()

        # updating params
        optimizer.step()

        iteration += 1

Can you post your code for the DataLoader, especially how you are creating the trainloader?
The labels shape [100, 10] should be taken care of by the DataLoader. If the data loaders are correct, then the 1 vs. 3 channel handling will also be taken care of by them, so there is no need to change it manually.

The size of labels was already [100] when I loaded the data.

The code is below,

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

trainset = torchvision.datasets.CIFAR10(root='./data', 
                                        train=True,download=True, 
                                        transform=transform)

testset = torchvision.datasets.CIFAR10(root='./data', 
                                       train=False,
                                       download=True, 
                                       transform=transform)

trainloader = torch.utils.data.DataLoader(trainset, 
                                          batch_size=batch_size,
                                          shuffle=True, 
                                          num_workers=2)

testloader = torch.utils.data.DataLoader(testset, 
                                         batch_size=batch_size,
                                         shuffle=False, 
                                         num_workers=2)

dataiter = iter(trainloader)
images, labels = next(dataiter)

labels.size()
# [100]

I found the error in your code. :smile:
My bad, the size of the labels should indeed be [100].

The DataLoader creates a [100, 3, 32, 32] batch; you can verify this with images.size() right after the labels in the code above.
Since the model is fully connected, its input dimensions should be [100, 3*32*32].

So please make the following changes:
comment out this line: images = images.view(-1, 32*32).requires_grad_()

and change this line: images = images.reshape([100,3,1024])
to this:
images = images.reshape([100, 3*1024])

This should work.
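
A sketch of what the adjusted loop body could look like with those two changes (it reuses trainloader, model, criterion, and optimizer from the earlier posts, and assumes input_dim is raised to 3*32*32 = 3072 as suggested above):

for i, (images, labels) in enumerate(trainloader):
    # flatten each [3, 32, 32] image into one 3072-long vector
    images = images.reshape(images.size(0), 3 * 32 * 32)   # [100, 3072]

    optimizer.zero_grad()
    outputs = model(images)              # [100, 10] once input_dim = 3*32*32
    loss = criterion(outputs, labels)    # labels stay [100]
    loss.backward()
    optimizer.step()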

Thanks for the help! [100, 3*32*32] did the trick.

I posted the wrong output size earlier and have edited my previous post. It should’ve been:

print('the output sizes {}'.format(outputs.size()))
# [100, 3, 10]

So instead, I used,

images = Variable(images.view(-1, 3*32*32))

and I had to update the input_dim to 3*32*32.
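
As a side note (assuming a reasonably recent PyTorch version): Variable has been merged into Tensor since PyTorch 0.4, so the wrapper is optional and the plain view is enough:

images = images.view(-1, 3 * 32 * 32)   # [100, 3072]; no Variable wrapper needed on PyTorch >= 0.4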

Thanks for the help!
