I would like to restore my DataLoader the same way I restore the model, scheduler, and optimizer via `state_dict`. In certain circumstances I may need to pause training in the middle of an epoch, and I would like to resume exactly where it stopped. However, unlike other PyTorch classes, the DataLoader has no `state_dict`. I currently use `islice` from the `itertools` library to continue training from a given step within the epoch. Unfortunately, I cannot find any way to set the index list inside the DataLoader's sampler. This is not an issue when the data is iterated sequentially, but for shuffled iteration it is. Inside `RandomSampler.__iter__(self)` the index list is created like this: `iter(torch.randperm(n).tolist())`. If this list were part of the sampler, I would be able to get and set it. My question is: is there a way I have missed to restore the sampler's iteration index list? If not, you might want to consider and discuss my proposal to make the `torch.randperm(n).tolist()` list an attribute of the sampler (`self`).
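As a workaround while such a feature does not exist in core PyTorch, one can write a custom sampler that keeps the permutation and the current position as attributes. The sketch below is a hypothetical `ResumableRandomSampler` (the class name, the `state_dict`/`load_state_dict` methods, and the `seed` parameter are my own; they are not part of `torch.utils.data`). It assumes single-process loading (`num_workers=0`); with worker prefetching, the sampler's position can run ahead of the batches actually consumed.

```python
import torch
from torch.utils.data import Sampler

class ResumableRandomSampler(Sampler):
    """Random sampler whose shuffled index list and current position
    can be saved and restored, allowing mid-epoch resumption."""

    def __init__(self, data_source, seed=0):
        self.data_source = data_source
        self.generator = torch.Generator()
        self.generator.manual_seed(seed)
        # The permutation is stored on self, unlike in RandomSampler,
        # so it can be captured in state_dict().
        self.perm = torch.randperm(len(data_source),
                                   generator=self.generator).tolist()
        self.pos = 0

    def __iter__(self):
        while self.pos < len(self.perm):
            idx = self.perm[self.pos]
            self.pos += 1
            yield idx
        # Epoch finished: draw a fresh permutation for the next epoch.
        self.perm = torch.randperm(len(self.data_source),
                                   generator=self.generator).tolist()
        self.pos = 0

    def __len__(self):
        return len(self.data_source)

    def state_dict(self):
        return {"perm": list(self.perm),
                "pos": self.pos,
                "rng": self.generator.get_state()}

    def load_state_dict(self, state):
        self.perm = list(state["perm"])
        self.pos = state["pos"]
        self.generator.set_state(state["rng"])
```

Passing this sampler via `DataLoader(dataset, sampler=...)` (with `shuffle` left unset) would then let you checkpoint `sampler.state_dict()` alongside the model and restore it on resume, avoiding the `islice` workaround entirely.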