Hello, everyone! In my project, I need to know the indices of the data points that the DataLoader samples from the training set, but I don't know how to get them.
I tried an alternative below by setting `shuffle=False`, so that I wouldn't need the indices at all: the sampled data should simply have the same order as in the training set. But it seems the DataLoader still reorders the samples.
```python
loader = torch.utils.data.DataLoader(
    trainset,
    batch_size=args.train_batch,
    shuffle=False,
    num_workers=args.workers,
)
it = iter(loader)
for i in range(len(loader)):
    inputs, targets = next(it)
    # compare the batch against the corresponding sample in trainset
    print(torch.sum(torch.abs(inputs - trainset[i * args.train_batch])))
```
If the DataLoader returned the data in exactly the same order as the training set, this snippet should print zeros. But it prints non-zero values.
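For context on what I'm ultimately trying to do: one approach I'm considering is wrapping the dataset so that `__getitem__` also returns the index. This is only a sketch under my own assumptions (the `IndexedDataset` wrapper and the `ToyDataset` stand-in are hypothetical names, not part of my real project), but it shows the idea:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class IndexedDataset(Dataset):
    """Wraps a map-style dataset so each item also carries its index."""
    def __init__(self, base):
        self.base = base

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        data, target = self.base[idx]
        return data, target, idx  # the index rides along with the sample

# Toy stand-in for a real training set: sample i is a tensor filled with i.
class ToyDataset(Dataset):
    def __len__(self):
        return 10

    def __getitem__(self, idx):
        return torch.full((3,), float(idx)), idx % 2

loader = DataLoader(IndexedDataset(ToyDataset()), batch_size=4, shuffle=True)
for data, target, idx in loader:
    # `idx` holds the original dataset indices of the samples in this batch,
    # even though the loader shuffled them.
    print(idx)
```

With this, even with `shuffle=True`, each batch tells me exactly which training-set positions it came from.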