DataLoader gives double instead of float?

X_train = torch.from_numpy(X_train)
Y_train = torch.from_numpy(Y_train)
X_test = torch.from_numpy(X_test)
Y_test = torch.from_numpy(Y_test)

[all dtypes torch.FloatTensor confirmed]

train = torch.utils.data.TensorDataset(X_train, Y_train)
trainloader = torch.utils.data.DataLoader(train, batch_size=BATCH_SIZE, shuffle=True)


train_iter = iter(trainloader)
data = next(train_iter)
x, y = data
print(y)

And I get torch.DoubleTensor for y,

while x is torch.FloatTensor.


I can't reproduce your issue. To resolve the problem, we need a self-contained snippet that we can run.

The following reproduces this.

import numpy as np

import torch
import torch.utils.data


X_train = np.random.uniform(-1, 1, (1000,11)).astype(np.float32)
Y_train = np.hstack((np.zeros(500), np.ones(500))).astype(np.float32)

X_train = torch.from_numpy(X_train)
Y_train = torch.from_numpy(Y_train)


print(X_train)
print(Y_train)


train = torch.utils.data.TensorDataset(X_train, Y_train)
trainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)


train_iter = iter(trainloader)
data = next(train_iter)
x, y = data
print(x)
print(y)

[torch.FloatTensor of size 1000x11]
[torch.FloatTensor of size 1000]
[torch.FloatTensor of size 128x11]
[torch.DoubleTensor of size 128]

Maybe this is intentional, but I can't figure out why a float would be turned into a double in this context, and why it only happens to the second tensor.

Hi,
The problem comes from the fact that your Y_train is a 1D tensor: indexing it returns plain Python numbers, so when the batch is created the collate step stacks numbers rather than tensors (and creates a double tensor to keep the best precision).
Reshaping your Y_train to a 2D tensor solves the problem:

Y_train = torch.from_numpy(Y_train).view(-1, 1)
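
For reference, here is the reproducer above with that one-line fix applied, as a self-contained sketch (the .type() calls at the end are only there to confirm the dtypes):

import numpy as np
import torch
import torch.utils.data

X_train = torch.from_numpy(np.random.uniform(-1, 1, (1000, 11)).astype(np.float32))
# 2D target: each sample is now a (1,) tensor instead of a plain Python number
Y_train = torch.from_numpy(
    np.hstack((np.zeros(500), np.ones(500))).astype(np.float32)
).view(-1, 1)

train = torch.utils.data.TensorDataset(X_train, Y_train)
trainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)

x, y = next(iter(trainloader))
print(x.type())  # torch.FloatTensor
print(y.type())  # torch.FloatTensor, shape (128, 1)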

@apaszke changing this line to:

 return self.data_tensor.narrow(0, index, 1), self.target_tensor.narrow(0, index, 1)

should solve the issue by always returning Tensors rather than plain numbers. Would this break something I'm not aware of? (Let me know if you want me to send a PR for that.)
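
To illustrate the idea, here is a hedged sketch (a standalone Dataset written for this post, not the actual patch): narrowing along dim 0 keeps every target as a length-1 FloatTensor, so the collate step stacks tensors instead of numbers:

import torch
from torch.utils.data import Dataset

class NarrowTensorDataset(Dataset):
    """Like TensorDataset, but always returns size-1 slices along dim 0."""
    def __init__(self, data_tensor, target_tensor):
        assert data_tensor.size(0) == target_tensor.size(0)
        self.data_tensor = data_tensor
        self.target_tensor = target_tensor

    def __getitem__(self, index):
        # narrow(0, index, 1) returns a (1, ...) tensor even for a 1D target,
        # so no sample is ever unpacked into a plain Python number.
        return (self.data_tensor.narrow(0, index, 1),
                self.target_tensor.narrow(0, index, 1))

    def __len__(self):
        return self.data_tensor.size(0)

One side effect to note: batches then carry an extra size-1 dimension (e.g. x comes out as (128, 1, 11)), since the default collate stacks the (1, ...) samples along a new batch dimension.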


copy_ doesn't care if it gets data in shape (batch, 1) or (batch,)? A quick test shows no change in loss behavior.

input = Variable(torch.FloatTensor(BATCH_SIZE, dims).cuda())
label = Variable(torch.FloatTensor(BATCH_SIZE).cuda())

x, y = next(train_iter)
input.data.resize_(x.size()).copy_(x)
label.data.resize_(x.size(0)).copy_(y)

No, copy_ won't care: per http://pytorch.org/docs/tensors.html#torch.Tensor.copy_ it just requires the two tensors to have the same number of elements!
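
For what it's worth, a small sketch of a shape-explicit variant, continuing the snippet above (the y.view(-1) call is my addition, not from the thread): flattening the target before the copy means nothing depends on copy_'s element-count behavior at all.

x, y = next(train_iter)                           # y is (batch, 1) after the view(-1, 1) fix
input.data.resize_(x.size()).copy_(x)
label.data.resize_(x.size(0)).copy_(y.view(-1))   # flatten to (batch,) so shapes match exactly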