DataLoader gives double instead of float?

The following code reproduces the issue.

import numpy as np

import torch
import torch.utils.data


X_train = np.random.uniform(-1, 1, (1000, 11)).astype(np.float32)      # features, float32
Y_train = np.hstack((np.zeros(500), np.ones(500))).astype(np.float32)  # binary labels, also float32

X_train = torch.from_numpy(X_train)
Y_train = torch.from_numpy(Y_train)


print(X_train)
print(Y_train)


train = torch.utils.data.TensorDataset(X_train, Y_train)
trainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)


train_iter = iter(trainloader)
data = next(train_iter)
x, y = data
print(x)
print(y)

The output (tensor contents omitted, types only):

X_train:   [torch.FloatTensor of size 1000x11]
Y_train:   [torch.FloatTensor of size 1000]
x (batch): [torch.FloatTensor of size 128x11]
y (batch): [torch.DoubleTensor of size 128]

Maybe this is intentional, but I can't figure out why a float tensor would be turned into a double in that context, and why it only happens to the second one (the labels).
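
For now I'm working around it with the sketch below, which just casts the label batch back to float inside the loop. This assumes the conversion only affects the labels, as in the output above, and it only masks the symptom rather than explaining it:

for x, y in trainloader:
    # y arrives as a DoubleTensor here; cast it back to float32 explicitly
    y = y.float()
    print(x.type(), y.type())
    break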