Strange bug with numpy conversion and back

Kind of a weird bug. If I do:

train_loader = torch.utils.data.DataLoader(..., batch_size=200, shuffle=True)  # dataset argument got cut off when posting
semi_loader = torch.utils.data.DataLoader(..., batch_size=200, shuffle=True)
valid_loader = torch.utils.data.DataLoader(..., batch_size=200, shuffle=True)

features = train_loader.dataset.train_data.numpy()
labels = train_loader.dataset.train_labels.numpy()

img = features
img = img.astype('float32')
lab = labels

img, lab = torch.from_numpy(img), torch.from_numpy(lab)

train = torch.utils.data.TensorDataset(img, lab)
train_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=False)

The same model that gets 96% accuracy on the original train_loader now degrades to random guessing (~10% for MNIST). Is there something I'm doing wrong?

Could it be the different batch size or the shuffle setting? Also, I don't understand why I have to unsqueeze a dimension that the first train_loader never needed.
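Since the DataLoader/TensorDataset lines above got truncated when I posted, here is a minimal self-contained version of the same round-trip. The data here is random uint8 standing in for MNIST's raw `train_data`/`train_labels` (so the shapes and dtypes match, but the values are made up):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical stand-in for MNIST's raw tensors: uint8 images in
# [0, 255], shape (N, 28, 28), with no channel dimension.
features = torch.randint(0, 256, (100, 28, 28), dtype=torch.uint8)
labels = torch.randint(0, 10, (100,))

# The numpy round-trip from the snippet above.
img = torch.from_numpy(features.numpy().astype('float32'))
lab = torch.from_numpy(labels.numpy())

# This is the unsqueeze I'm asking about -- without it the model
# rejects the input shape, but the first train_loader never needed it.
img = img.unsqueeze(1)  # (N, 1, 28, 28)

train = TensorDataset(img, lab)
train_loader = DataLoader(train, batch_size=64, shuffle=False)

batch, _ = next(iter(train_loader))
print(batch.shape)  # torch.Size([64, 1, 28, 28])
print(batch.max())  # tensor(255.) -- still raw pixel scale
```

One thing I notice when running this: after the round-trip the pixel values are still in 0–255, whereas the original loader's ToTensor transform scales images to [0, 1]. Not sure if that's related to the accuracy drop.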