Hi, I’m having trouble with my training data: x_train has shape (7304, 20) and y_train has shape (7304, 3). When I feed a batch through the model and compute the loss between the model output and the labels, I get the error shown below. Here are the further details of the code.
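For context, the data is wrapped in a DataLoader roughly like this (a simplified sketch, not my actual Dataset class — the class name and batch size are placeholders, but it returns dicts with "sample" and "target" keys, which is why the loop below indexes the batch that way):

import torch
from torch.utils.data import Dataset, DataLoader

class RULDataset(Dataset):   # illustrative name only
    def __init__(self, x, y):
        self.x = torch.as_tensor(x, dtype=torch.float32)   # (7304, 20)
        self.y = torch.as_tensor(y, dtype=torch.float32)   # (7304, 3)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return {"sample": self.x[idx], "target": self.y[idx]}

train_loader = DataLoader(RULDataset(x_train, y_train), batch_size=32, shuffle=True)

The training loop itself is: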
num_epochs = 100
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for epoch in range(num_epochs):
    model.train()
    for i, batch in enumerate(train_loader):
        x = batch["sample"].to(device)
        rul = batch["target"].to(device)

        # Forward
        x = x.unsqueeze(1)               # (batch, 20) -> (batch, 1, 20)
        output = model(x)
        loss = criterion(output, rul)    # <-- error is raised here

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        print("Epoch {}/{}. Iter {}/{}. Loss {}".format(
            epoch + 1, num_epochs, i + 1, len(train_loader), loss.item()))
And here is the full error traceback:
--------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[70], line 12
10 x = x.unsqueeze(1)
11 output = model(x)
---> 12 loss = criterion(output,rul)
14 #Backward and Optimize
15 #optimizer.zero_grad()
16 #loss.backward()
17 #optimizer.step()
18
19 #print("Epoch{}/{}.Iter{}/{}.Loss{}").format(epoch+1,num_epochs,iter,num_iters,loss)
File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/.local/lib/python3.10/site-packages/torch/nn/modules/loss.py:536, in MSELoss.forward(self, input, target)
535 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 536 return F.mse_loss(input, target, reduction=self.reduction)
File ~/.local/lib/python3.10/site-packages/torch/nn/functional.py:3294, in mse_loss(input, target, size_average, reduce, reduction)
3291 if size_average is not None or reduce is not None:
3292 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3294 expanded_input, expanded_target = torch.broadcast_tensors(input, target)
3295 return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
File ~/.local/lib/python3.10/site-packages/torch/functional.py:74, in broadcast_tensors(*tensors)
72 if has_torch_function(tensors):
73 return handle_torch_function(broadcast_tensors, tensors, *tensors)
---> 74 return _VF.broadcast_tensors(tensors)
RuntimeError: The size of tensor a (20) must match the size of tensor b (3) at non-singleton dimension 2
I know the error comes from the mismatch between the shapes of x and y, but how can I fix this problem?
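For what it’s worth, the same error can be reproduced in isolation with dummy tensors of these shapes (the output shape here is my guess based on the error message, not taken from the real model):

import torch
import torch.nn as nn

criterion = nn.MSELoss()
out = torch.randn(8, 1, 20)   # assumed model output shape: dim 2 is 20, as in the error
tgt = torch.randn(8, 3)       # one batch of y_train targets
criterion(out, tgt)           # RuntimeError: The size of tensor a (20) must match
                              # the size of tensor b (3) at non-singleton dimension 2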