Strange bug with numpy conversion and back

Kind of a weird bug. If I do:

train_loader = torch.utils.data.DataLoader(trainset_imoprt, batch_size=200, shuffle=True)
semi_loader = torch.utils.data.DataLoader(trainunl_imoprt, batch_size=200, shuffle=True)
valid_loader = torch.utils.data.DataLoader(validset_import, batch_size=200, shuffle=True)

features = train_loader.dataset.train_data.numpy()
labels = train_loader.dataset.train_labels.numpy()

img = features
img = img.astype('float32')
lab = labels

img, lab = torch.from_numpy(img), torch.from_numpy(lab)

train = torch.utils.data.TensorDataset(img.unsqueeze(1), lab)
train_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=False)

The same model that gives me 96% when trained with the original train_loader now drops to random guessing (10% accuracy, since this is MNIST) with the rebuilt one. Is there something I'm doing wrong?
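In case it helps, here is roughly how I'd compare what the two pipelines actually hand to the model (just a sketch; it assumes trainset_imoprt is the torchvision MNIST dataset from above and that it was built with transforms.ToTensor(), otherwise indexing it returns a PIL image):

first_x, first_y = trainset_imoprt[0]        # goes through the dataset's transform pipeline
print(first_x.dtype, first_x.min().item(), first_x.max().item(), first_x.shape)

batch_x, batch_y = next(iter(train_loader))  # batch from the rebuilt TensorDataset loader
print(batch_x.dtype, batch_x.min().item(), batch_x.max().item(), batch_x.shape)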

Maybe it's because of the different batch size or the shuffle setting? Also, I don't understand why I have to unsqueeze a dimension here that the first train_loader never needed.
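For the unsqueeze part, this is the shape mismatch I mean (using the variables from the snippet above; the comments show what I'd expect for MNIST, and it again assumes a ToTensor() transform on the original dataset):

print(features.shape)             # (60000, 28, 28): the raw array has no channel dimension
print(img.unsqueeze(1).shape)     # torch.Size([60000, 1, 28, 28]) after adding one
sample_x, _ = trainset_imoprt[0]  # sample via the original dataset's transform
print(sample_x.shape)             # torch.Size([1, 28, 28]): already has the channel dim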