ValueError: Target size (torch.Size([4, 1])) must be the same as input size (torch.Size([16, 1]))

Can someone help me? I cannot get past this error.

import torch.optim as optim
import torch.nn.functional as F

# Loss function
criterion = nn.BCEWithLogitsLoss()

# Optimizer
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Define number of training epochs
EPOCHS = 10

# train the model

for epoch in range(EPOCHS):
    # clear gradients
    optimizer.zero_grad()
    running_loss = 0
    for images, labels in train_loader:
        # move data to device
        images = images.to(device)
        labels = labels.to(device)
        labels = labels.view(-1, 1) # reshape labels
        labels = torch.round(labels).to(torch.float).view(-1, 1)

        # zero gradients
        optimizer.zero_grad()

        # forward pass
        outputs = model(images)
        outputs = outputs.view(-1, 1) # reshape output
        running_loss = criterion(outputs, labels)

        # backward pass
        running_loss.backward()
        # update model parameters
        optimizer.step()

        running_loss += running_loss.item()

    # print loss and accuracy
    print("epoch:", epoch, "loss:", running_loss.item())
    running_loss = 0

    # validation loss
    test_loss = 0
    with torch.no_grad():
        for images, labels in val_loader:
            # move data to device
            images = images.to(device)
            labels = labels.to(device)
            labels = torch.round(labels).to(torch.float).view(-1, 1)
            # forward pass
            outputs = model(images)
            outputs = outputs.view(-1, 1) # reshape output
            test_loss += criterion(outputs, labels)
            test_loss.backward()
            optimizer.step()
            test_loss += test_loss.item()

    # print loss and accuracy
    print("epoch:", epoch, "loss:", running_loss.item(), "test_loss:", test_loss.item())

# evaluate model on test data
with torch.no_grad():
    test_loss = 0
    for images, labels in test_generator:
        # move data to device
        images = images.to(device)
        labels = labels.to(device)
        labels = labels.view(-1, 1) # reshape labels

        # forward pass
        test_outputs = model(images)
        test_loss += criterion(test_outputs, labels)

    test_loss = test_loss / len(test_generator)
    print("Test loss:", test_loss.item())

ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_14840\2968328638.py in <module>
     30 outputs = model(images)
     31 outputs = outputs.view(-1, 1) #reshape output
---> 32 running_loss = criterion(outputs, labels)
     33
     34 # backward pass

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
   1192         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194             return forward_call(*input, **kwargs)
   1195         # Do not call functions when jit is used
   1196         full_backward_hooks, non_full_backward_hooks = [], []

~\anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    718
    719     def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 720         return F.binary_cross_entropy_with_logits(input, target,
    721                                                   self.weight,
    722                                                   pos_weight=self.pos_weight,

~\anaconda3\lib\site-packages\torch\nn\functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
   3158
   3159     if not (target.size() == input.size()):
-> 3160         raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   3161
   3162     return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)

ValueError: Target size (torch.Size([4, 1])) must be the same as input size (torch.Size([16, 1]))

Right before the line that raises the error, add a print statement:

print(outputs.size(), labels.size())

BCEWithLogitsLoss expects these to be the same size and it seems they are not. Something like this works fine:

import torch
import torch.nn as nn

loss = nn.BCEWithLogitsLoss()
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
output = loss(input, target)
output.backward()

https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html

Thank you for your reply. I have added the code as follows:

        # forward pass
        outputs = model(images)
        outputs = outputs.view(-1, 1) #reshape output
        print(outputs.size(), labels.size())
        running_loss = criterion(outputs, labels)

But the error remains the same. Here is a link to the complete code: COMPLETE CODE

A print statement will not fix the error, but it will show you whether the sizes match. Scroll up above the error message to see what was printed before the error occurred.

torch.Size([16, 1]) torch.Size([4, 1])


ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_11832\1478269417.py in <module>
     31 outputs = outputs.view(-1, 1) #reshape output
     32 print(outputs.size(), labels.size())
---> 33 running_loss = criterion(outputs, labels)
     34
     35 # backward pass

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
   1192         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194             return forward_call(*input, **kwargs)
   1195         # Do not call functions when jit is used
   1196         full_backward_hooks, non_full_backward_hooks = [], []

~\anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    718
    719     def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 720         return F.binary_cross_entropy_with_logits(input, target,
    721                                                   self.weight,
    722                                                   pos_weight=self.pos_weight,

~\anaconda3\lib\site-packages\torch\nn\functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
   3158
   3159     if not (target.size() == input.size()):
-> 3160         raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   3161
   3162     return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)

ValueError: Target size (torch.Size([4, 1])) must be the same as input size (torch.Size([16, 1]))

The model outputs don't match the label size. It means your model is not producing outputs of the right size.
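One quick way to confirm this is to push a dummy batch through the model and look at the output shape. This is only a sketch: the 224x224 size below is an assumption, so substitute whatever size your DataLoader actually yields.

import torch

# Hypothetical batch of 4 images at 224x224 (assumed size; use your real one)
dummy = torch.randn(4, 3, 224, 224).to(device)
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # a batch of 4 should give torch.Size([4, 1]);
                  # anything else means the flatten/linear sizing is wrong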

Please, what can I do to make it work? I'm new to PyTorch.

Update your model code so it gives the correct output size.

This is the model:
import torch.nn as nn
import torch.nn.functional as F

# model definition
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        # convolutional layers
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        self.conv4 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1)
        self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        # dropout layers
        self.dropout1 = nn.Dropout2d(p=0.2)
        self.dropout2 = nn.Dropout2d(p=0.2)
        self.dropout3 = nn.Dropout2d(p=0.2)

        # fully connected layers
        self.fc1 = nn.Linear(256*7*7, 256)
        self.fc2 = nn.Linear(256, 1)

        # sigmoid activation
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.dropout1(x)
        x = self.pool2(F.relu(self.conv2(x)))
        x = self.dropout2(x)
        x = self.pool3(F.relu(self.conv3(x)))
        x = self.dropout3(x)
        x = self.pool4(F.relu(self.conv4(x)))
        x = x.view(-1, 256*7*7)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        x = self.sigmoid(x)
        return x

# create an instance of the model

model = Net()
model.to(device)

Your model doesn't handle variable input sizes. Each of your Conv2d layers (kernel_size=3, padding=1) produces output of the same spatial size as its input, and each MaxPool2d halves both spatial dimensions. You have 4 of those.

Say your input image is 32x32:
[32, 32] / (2**4) = [2, 2]

That leaves a 2x2 = 4-value feature map per channel coming out of the final MaxPool2d, not the 7x7 that your first linear layer (256*7*7) expects.
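To make the arithmetic concrete, here is a small sketch that pushes a dummy batch through the same conv/pool settings as your model and prints the shape after each stage (the 32x32 input is the hypothetical size from the example above):

import torch
import torch.nn as nn

# Hypothetical 32x32 input, as in the example above; 4 is the batch size.
x = torch.randn(4, 3, 32, 32)

# Same conv/pool settings as the model above.
convs = [nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1)
         for c_in, c_out in [(3, 32), (32, 64), (64, 128), (128, 256)]]
pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

for conv in convs:
    x = pool(conv(x))  # the conv keeps H and W, the pool halves them
    print(x.shape)
# torch.Size([4, 32, 16, 16])
# torch.Size([4, 64, 8, 8])
# torch.Size([4, 128, 4, 4])
# torch.Size([4, 256, 2, 2])  <- not the 7x7 that fc1 (256*7*7) expects

Note also that if the real input were, say, 224x224, the final feature map would be 256x14x14, which holds 4 * (256*7*7) values per image, so x.view(-1, 256*7*7) silently turns a batch of 4 into a batch of 16. That is exactly the [16, 1] vs [4, 1] mismatch in the error.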

Can you help me with the section to update?

The problem has been identified. There are many ways to fix it. If you built that model, it should be pretty straightforward to get the size down to 1x1 before the linear layers.

You could add another Conv2d and MaxPool2d, you could put a final AvgPool2d or AdaptiveAvgPool2d before the flatten, you could increase the input size of your first linear layer by a factor of 4, etc. See the sketch below.
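For instance, here is a minimal sketch of the AdaptiveAvgPool2d option (the tensor sizes below are illustrative): it collapses whatever spatial size arrives into a fixed 1x1, so the flatten before the linear layers stops depending on the input image size.

import torch
import torch.nn as nn

avgpool = nn.AdaptiveAvgPool2d((1, 1))

# Any incoming spatial size collapses to 1x1:
print(avgpool(torch.randn(4, 256, 2, 2)).shape)    # torch.Size([4, 256, 1, 1])
print(avgpool(torch.randn(4, 256, 14, 14)).shape)  # torch.Size([4, 256, 1, 1])

With that in place, the first linear layer would take 256 inputs instead of 256*7*7.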

Thank you once again, this really helps. This is how the model looks now:
import torch.nn as nn
import torch.nn.functional as F

# model definition
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Add the AdaptiveAvgPool2d layer
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))

        # convolutional layers
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        self.conv4 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1)
        self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        #self.conv5 = nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1)
        #self.pool5 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

        # dropout layers
        self.dropout1 = nn.Dropout2d(p=0.2)
        self.dropout2 = nn.Dropout2d(p=0.2)
        self.dropout3 = nn.Dropout2d(p=0.2)

        # fully connected layers
        self.fc1 = nn.Linear(256, 256)
        self.fc2 = nn.Linear(256, 1)

        # sigmoid activation
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.dropout1(x)
        x = self.pool2(F.relu(self.conv2(x)))
        x = self.dropout2(x)
        x = self.pool3(F.relu(self.conv3(x)))
        x = self.dropout3(x)
        x = self.pool4(F.relu(self.conv4(x)))
        x = self.avgpool(x) # apply adaptive average pooling
        x = x.view(-1, 256)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        x = self.sigmoid(x)
        return x

# create an instance of the model

model = Net()
model.to(device)

The error is gone, but it keeps on printing:
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
torch.Size([4, 1]) torch.Size([4, 1])
...

Bravo!

Now you can delete that print statement you added earlier.

I just remembered that, removed it, and re-ran the code, but it yielded an error:
epoch: 0 loss: -4.0401434898376465


RuntimeError                              Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_10448\3620899133.py in <module>
     56 outputs = outputs.view(-1, 1) # reshape output
     57 test_loss += criterion(outputs, labels)
---> 58 test_loss.backward()
     59 optimizer.step()
     60 test_loss += test_loss.item()

~\anaconda3\lib\site-packages\torch\_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    486             inputs=inputs,
    487         )
--> 488         torch.autograd.backward(
    489             self, gradient, retain_graph, create_graph, inputs=inputs
    490         )

~\anaconda3\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    195     # some Python versions print out the first line of a multi-line function
    196     # calls in the traceback and some print out the last line
--> 197     Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    198         tensors, grad_tensors_, retain_graph, create_graph, inputs,
    199         allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Please read here:

https://pytorch.org/docs/stable/notes/autograd.html#in-place-operations-with-autograd

And if you’re not clear what an “in-place operation” is, read here first:

https://discuss.pytorch.org/t/what-is-in-place-operation
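If it helps to see it in code: inside a torch.no_grad() block no computation graph is recorded, so a loss computed there has no grad_fn to backpropagate through, and a validation loop shouldn't call backward() or optimizer.step() at all. A minimal sketch of the validation part, reusing the names from the code above (model, criterion, val_loader, device), might look like this:

# Validation: no gradients tracked, so no backward() and no optimizer.step().
model.eval()  # also disables the Dropout2d layers during evaluation
test_loss = 0.0
with torch.no_grad():
    for images, labels in val_loader:
        images = images.to(device)
        labels = torch.round(labels.to(device)).to(torch.float).view(-1, 1)
        outputs = model(images).view(-1, 1)
        # .item() returns a plain Python float, so test_loss never becomes
        # a tensor that something might try to call backward() on
        test_loss += criterion(outputs, labels).item()
test_loss /= len(val_loader)
print("validation loss:", test_loss)
model.train()  # back to training mode for the next epoch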


Fixed. Thank you very much. How can I optimize the parameters using GWO?
