Problem with data loaded on GPU

Hi everyone.
A newbie issue.

I’m using CrossEntropyLoss with data on the GPU, but I keep getting this error:
RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1).

I think my primitive way of loading data onto the GPU is causing the problem, because when I run the network on data from the CPU everything works correctly.

import numpy as np
import torch
from torch.utils.data import TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# load the whole CSV, then move features and labels to the GPU
xy = np.loadtxt('.csv', delimiter=',', dtype=np.float32)

x_datai = torch.Tensor(xy[:, 0:-1]).to(device)  # all columns except the last
y_datai = torch.Tensor(xy[:, [-1]]).to(device)  # last column as labels, shape [N, 1]

train_loader = TensorDataset(x_datai, y_datai)


-------------------

for i, data in enumerate(train_loader):
    inputs, target = data

    target = target.squeeze(1)

    optimizer.zero_grad()
    output = model(inputs).to(device)

    loss = criterion(output, target.long())
    loss.backward()
    optimizer.step()

I tried to solve this on my own for a long time, but it’s too much for me.
Can someone explain how the data loader should look to avoid this issue?

Thanks for help.

Could you check the shapes of inputs and target?
I guess both might be missing the batch dimension at dim0.
If that’s the case, you could try applying .unsqueeze(0) to both of them; a quick check is sketched below.
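
Something like this at the top of your loop would show it (a minimal sketch; iterating a TensorDataset directly yields single, unbatched samples):

for i, data in enumerate(train_loader):
    inputs, target = data
    # without a DataLoader you should see per-sample shapes here,
    # e.g. inputs: torch.Size([num_features]) and target: torch.Size([1])
    print(inputs.shape, target.shape)
    inputs = inputs.unsqueeze(0)   # adds the batch dimension -> [1, num_features]
    target = target.unsqueeze(0)   # -> [1, 1]
    break  # one iteration is enough for the check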

Now I’m getting this:
return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
RuntimeError: multi-target not supported at c:\pytorch\aten\src\thcunn\generic/ClassNLLCriterion.cu:16

Sorry for the confusion. The target shouldn’t be unsqueezed for nn.CrossEntropyLoss or nn.NLLLoss, since both expect class indices with one dimension less than the model output.
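
A minimal sketch of the expected shapes (batch_size=4 and num_classes=2 are just placeholder values):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
batch_size, num_classes = 4, 2  # placeholder values

output = torch.randn(batch_size, num_classes)          # model output: [N, C]
target = torch.randint(0, num_classes, (batch_size,))  # class indices: [N], dtype long by default

loss = criterion(output, target)  # works: target is 1D with no class dimension

Regarding your question about the data loader: wrapping the TensorDataset in a DataLoader, e.g. DataLoader(TensorDataset(x_datai, y_datai), batch_size=32) (batch_size=32 is just an example value), would yield batched tensors directly, so no unsqueezing is needed, and your original target.squeeze(1) would then produce the expected 1D target.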

It’s working perfectly!
Thank you!