ValueError: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 2])) is deprecated. Please ensure they have the same size

This is the train and test code where I think the error is coming from. Please help me figure it out.

Train and Test Engine

import torch
from tqdm.auto import tqdm

def train_step(model: torch.nn.Module,
               dataloader: torch.utils.data.DataLoader,
               loss_fn: torch.nn.Module,
               optimizer: torch.optim.Optimizer,
               device: torch.device):

    model.train()
    train_loss, train_acc = 0, 0

    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)
        # feed forward
        y_pred = model(X)
        # calculate loss
        loss = loss_fn(y_pred, y)
        train_loss += loss.item()
        # optimizer
        optimizer.zero_grad()
        # loss backward (backpropagation)
        loss.backward()
        optimizer.step()
        # calculate accuracy
        y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)
        train_acc += (y_pred_class == y).sum().item() / len(y_pred)

    train_loss = train_loss / len(dataloader)
    train_acc = train_acc / len(dataloader)
    return train_loss, train_acc

def test_step(model: torch.nn.Module,
              dataloader: torch.utils.data.DataLoader,
              loss_fn: torch.nn.Module,
              device: torch.device):
    model.eval()
    test_loss, test_acc = 0, 0

    with torch.inference_mode():
        for batch, (X, y) in enumerate(dataloader):
            X, y = X.to(device), y.to(device)
            # feed forward
            test_pred_logits = model(X)
            # calculate loss
            loss = loss_fn(test_pred_logits, y)
            test_loss += loss.item()
            # calculate accuracy
            test_pred_labels = test_pred_logits.argmax(dim=1)
            test_acc += (test_pred_labels == y).sum().item() / len(test_pred_labels)

    test_loss = test_loss / len(dataloader)
    test_acc = test_acc / len(dataloader)
    return test_loss, test_acc

def train(model: torch.nn.Module,
          train_dataloader: torch.utils.data.DataLoader,
          test_dataloader: torch.utils.data.DataLoader,
          optimizer: torch.optim.Optimizer,
          loss_fn: torch.nn.Module,
          epochs: int,
          device: torch.device):
    results = {"train_loss": [],
               "train_acc": [],
               "test_loss": [],
               "test_acc": []
               }

    model.to(device)

    for epoch in tqdm(range(epochs)):
        train_loss, train_acc = train_step(model=model,
                                           dataloader=train_dataloader,
                                           loss_fn=loss_fn,
                                           optimizer=optimizer,
                                           device=device)
        test_loss, test_acc = test_step(model=model,
                                        dataloader=test_dataloader,
                                        loss_fn=loss_fn,
                                        device=device)
        # print epoch results
        print(f"Epoch: {epoch+1} | train_acc: {train_acc:.4f} | train_loss: {train_loss:.4f} | test_acc: {test_acc:.4f} | test_loss: {test_loss:.4f}")
        # update results
        results["train_acc"].append(train_acc)
        results["train_loss"].append(train_loss)
        results["test_acc"].append(test_acc)
        results["test_loss"].append(test_loss)

    return results

Hi Tejas!

You are most likely passing an input and target of differing sizes to
BCELoss. (Similar issues arise for the same reason with other losses
such as BCEWithLogitsLoss and MSELoss, but you would get slightly
different error messages.)

(If you do this, broadcasting can occur under the hood of the loss function,
leading to unexpectedly incorrect results.)
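As an illustration, here is a minimal sketch (with made-up tensors whose shapes match the error message) that reproduces the exact ValueError with BCELoss, and shows that matching shapes work:

```python
import torch
import torch.nn as nn

# Shapes taken from the error message: input [32, 2], target [32].
probs = torch.rand(32, 2)                     # stand-in for model output
target = torch.randint(0, 2, (32,)).float()   # stand-in for labels

try:
    nn.BCELoss()(probs, target)
except ValueError as e:
    print(e)  # "Using a target size (torch.Size([32])) that is different ..."

# BCELoss expects input and target of the same shape.
loss = nn.BCELoss()(torch.rand(32), target)
print(loss.item())  # a nonnegative scalar
```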

Make sure that you understand the semantics of BCELoss and adjust your
code accordingly.
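For what it's worth, if your model really does emit [N, 2] logits while your labels are [N] integer class indices (an assumption based only on the shapes in the error message), then a loss such as nn.CrossEntropyLoss accepts exactly that pairing with no reshaping:

```python
import torch
import torch.nn as nn

logits = torch.randn(32, 2)              # [batch, num_classes] raw scores
labels = torch.randint(0, 2, (32,))      # [batch] integer class indices

# CrossEntropyLoss applies log-softmax internally and takes
# [N, C] inputs with [N] integer targets directly.
loss = nn.CrossEntropyLoss()(logits, labels)
print(loss.shape)  # torch.Size([]) -- a scalar
```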

If you are still having issues, please make a super-simplified test case – use
just a randomly-generated input and target and pass them directly to your
loss function – and post a fully-self-contained, runnable script that illustrates
your issue together with its output.

As an aside, for reasons of numerical stability, don’t use BCELoss – use
BCEWithLogitsLoss instead, adjusting your code as necessary (presumably
by removing a sigmoid() at the end of your model).
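A quick sketch of that swap, with made-up tensors (the two formulations are mathematically equivalent; the fused version is the numerically stable one):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(32)                      # raw model outputs, no sigmoid
target = torch.randint(0, 2, (32,)).float()

# Numerically stable: the sigmoid is fused into the loss internally.
stable = nn.BCEWithLogitsLoss()(logits, target)

# Equivalent, but less stable, two-step version.
two_step = nn.BCELoss()(torch.sigmoid(logits), target)

print(torch.allclose(stable, two_step, atol=1e-6))  # True
```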

Best.

K. Frank

I applied all the above changes but still don't know where the error is coming from.

Hi Tejas!

I don’t know either.

Could you post a truly-minimal, fully-self-contained, runnable script that
reproduces your issue?

Best.

K. Frank