Error with CNN, mismatch between target and input size

Hi there,

I am trying to implement my first CNN to classify pathological and healthy images.
During training, I keep getting an error that I can’t fix.
Despite searching on Google, I can’t find a solution.
Is there a kind soul who could help me? :innocent: :pray:

The code is as follows:

#def train_net(n_epoch): # Training our network
losses = []
for epoch in range(n_epoch):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(train, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        losses.append(loss.item())  # store the float, not the graph-attached tensor
        running_loss += loss.item()
        if i % 100 == 99:  # print every 100 mini-batches
            print('[%d, %5d] loss: %.10f' %
                  (epoch + 1, i + 1, running_loss / 100))
            running_loss = 0.0

plt.plot(losses, label='Training loss')
plt.show()

ValueError                                Traceback (most recent call last)
/var/folders/7x/f_g9k1797vn0y8xdfwzz10gc0000gn/T/ipykernel_71356/646090411.py in <module>
     13
     14 outputs = model(inputs)
---> 15 loss = criterion(outputs, labels)
     16 loss.backward()
     17 optimizer.step()

~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    610
    611     def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 612         return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
    613
    614

~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
   3054     reduction_enum = _Reduction.get_enum(reduction)
   3055     if target.size() != input.size():
-> 3056         raise ValueError(
   3057             "Using a target size ({}) that is different to the input size ({}) is deprecated. "
   3058             "Please ensure they have the same size.".format(target.size(), input.size())

ValueError: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1, 128, 2])) is deprecated. Please ensure they have the same size.

Based on the error message, it seems your model is returning a 4-dimensional output of shape [32, 1, 128, 2] while the target has the shape [32]. If you are working on a binary classification use case with nn.BCEWithLogitsLoss, make sure both the output and the target have the shape [batch_size, 1]. If not, could you explain your use case a bit more and what the shapes represent, please?
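For example (a minimal sketch, assuming the model’s last layer is nn.Linear(..., 1) so it emits one logit per sample):

    criterion = nn.BCEWithLogitsLoss()     # takes raw logits, applies sigmoid itself
    outputs = model(inputs)                # expected shape: [32, 1]
    targets = labels.float().unsqueeze(1)  # [32] -> [32, 1]; BCE needs float targets
    loss = criterion(outputs, targets)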

Thank you very much for your answer.
I am trying to make a CNN that classifies images as pathological or healthy.
If the network works, I would like to use a class activation map (CAM), which requires global average pooling after the last convolutional layer, just before the fully connected layer (roughly as sketched below).
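Something like this (only a sketch, with placeholder layer sizes, not my final architecture):

    class CamNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(1, 64, 5)
            self.gap = nn.AdaptiveAvgPool2d(1)   # [N, 64, H, W] -> [N, 64, 1, 1]
            self.fc = nn.Linear(64, 2)           # fc weights give the per-class CAM

        def forward(self, x):
            feats = F.mish(self.conv(x))         # keep these maps to build the CAM
            x = self.gap(feats).flatten(1)       # [N, 64]
            return self.fc(x)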

Here is the dataloader code

transform = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomResizedCrop(128),
    transforms.RandomHorizontalFlip(),
    transforms.Grayscale(),
    transforms.ToTensor()])

train_set = datasets.ImageFolder(data_dir + '/train', transform=transform)
test_set = datasets.ImageFolder(data_dir + '/test', transform=transform)

train = DataLoader(train_set, batch_size=32, shuffle=True)
test = DataLoader(test_set, batch_size=32, shuffle=True)

and the neural network code :

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 5)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.AvPool = nn.AvgPool2d(kernel_size=2, stride=2)
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.fc1 = nn.Linear(64*29*29, 100)
        self.fc2 = nn.Linear(100, 50)
        self.fc3 = nn.Linear(50, 2)

    def forward(self, x):
        print(x.shape)  # torch.Size([32, 1, 128, 128])
        x = self.pool(F.mish(self.conv1(x)))
        print(x.shape)  # torch.Size([32, 32, 62, 62])
        x = self.AvPool(F.mish(self.conv2(x)))
        print(x.shape)  # torch.Size([32, 64, 29, 29])
        x = x.view(-1, 64*29*29)
        print(x.shape)  # torch.Size([107648, 16])
        x = F.mish(self.fc1(x))
        print(x.shape)
        x = F.mish(self.fc2(x))
        x = self.fc3(x)
        return x

Thanks again for your help

The model architecture looks generally alright and should return a 2-dimensional output, which doesn’t match your error message, so double-check your code.

A minor issue: your flattening seems to be wrong:

    print(x.shape) #torch.Size([32, 64, 29, 29])
    x = x.view(-1, 64*29*29)
    print(x.shape) #torch.Size([107648, 16])

since it changes the batch size.
Use x = x.view(x.size(0), -1) to flatten the activation while keeping the batch size unchanged.
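As a quick check (shapes assume your batch of 32):

    x = torch.randn(32, 64, 29, 29)
    print(x.view(x.size(0), -1).shape)  # torch.Size([32, 53824]) -- batch dim kept
    print(torch.flatten(x, 1).shape)    # equivalent and arguably more idiomatic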

Thanks again.
I still have this error…

Here is the entire code:

import matplotlib.pyplot as plt
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch
import pandas as pd

#def get_data():
data_dir = '/Users/kevin/Downloads/OCT2022_ter'

transform = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomResizedCrop(128),
    transforms.RandomHorizontalFlip(),
    transforms.Grayscale(),
    transforms.ToTensor()])

train_set = datasets.ImageFolder(data_dir + '/train', transform=transform)
test_set = datasets.ImageFolder(data_dir + '/test', transform=transform)

train = DataLoader(train_set, batch_size=32, shuffle=True)
test = DataLoader(test_set, batch_size=32, shuffle=True)

#return train, test

classes = ('Control', 'Pathos')  # Defining the classes we have
dataiter = iter(train)
images, labels = next(dataiter)
fig, axes = plt.subplots(figsize=(10, 4), ncols=5)
for i in range(5):
    ax = axes[i]
    ax.imshow(images[i].squeeze(), cmap='gray')  # grayscale: [1, H, W] -> [H, W]
    ax.title.set_text('%5s' % classes[labels[i]])
plt.show()

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 5)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.AvPool = nn.AvgPool2d(kernel_size=2, stride=2)
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.fc1 = nn.Linear(64*29*29, 100)
        self.fc2 = nn.Linear(100, 50)
        self.fc3 = nn.Linear(50, 2)

    def forward(self, x):
        print(x.shape)  # torch.Size([32, 1, 128, 128])
        x = self.pool(F.mish(self.conv1(x)))
        print(x.shape)  # torch.Size([32, 32, 62, 62])
        x = self.AvPool(F.mish(self.conv2(x)))
        print(x.shape)  # torch.Size([32, 64, 29, 29])
        x = x.view(x.size(0), -1)
        print(x.shape)  # torch.Size([32, 53824])
        x = F.mish(self.fc1(x))
        print(x.shape)
        x = F.mish(self.fc2(x))
        x = self.fc3(x)
        return x

model = Net()
learning_rate = 0.01
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
n_epoch = 100

#def train_net(n_epoch): # Training our network
losses = []
for epoch in range(n_epoch):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(train, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        losses.append(loss.item())  # store the float, not the graph-attached tensor
        running_loss += loss.item()
        if i % 100 == 99:  # print every 100 mini-batches
            print('[%d, %5d] loss: %.10f' %
                  (epoch + 1, i + 1, running_loss / 100))
            running_loss = 0.0

plt.plot(losses, label='Training loss')
plt.show()
print('Finished Training')


ValueError                                Traceback (most recent call last)
/var/folders/7x/f_g9k1797vn0y8xdfwzz10gc0000gn/T/ipykernel_71356/646090411.py in <module>
     13
     14 outputs = model(inputs)
---> 15 loss = criterion(outputs, labels)
     16 loss.backward()
     17 optimizer.step()

~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    610
    611     def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 612         return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
    613
    614

~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
   3054     reduction_enum = _Reduction.get_enum(reduction)
   3055     if target.size() != input.size():
-> 3056         raise ValueError(
   3057             "Using a target size ({}) that is different to the input size ({}) is deprecated. "
   3058             "Please ensure they have the same size.".format(target.size(), input.size())

ValueError: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 2])) is deprecated. Please ensure they have the same size.

Maybe the problem is with the DataLoader or my images? I don’t understand…

An output size of [batch_size, 2] can be used for a 2-class multi-class classification with nn.CrossEntropyLoss, which would then expect a target in the shape [batch_size] containing class indices in [0, 1]. If you want to use nn.BCEWithLogitsLoss for a binary classification, make sure your model outputs a single logit in [batch_size, 1] and that the target has the same shape.
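Concretely, the two consistent setups look like this (a sketch; only the last layer and the loss/target handling change):

    # Option 1: 2-class multi-class -- two output units, integer class targets.
    criterion = nn.CrossEntropyLoss()   # takes raw logits, applies log-softmax itself
    outputs = model(inputs)             # [32, 2]
    loss = criterion(outputs, labels)   # labels: [32], values in {0, 1}

    # Option 2: binary -- one output unit (fc3 = nn.Linear(50, 1)), float targets.
    criterion = nn.BCEWithLogitsLoss()  # takes raw logits, applies sigmoid itself
    outputs = model(inputs)             # [32, 1]
    loss = criterion(outputs, labels.float().unsqueeze(1))  # both [32, 1]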

It seems to work with CrossEntropyLoss.
Thank you very much!