Distributed data sampler state saving

Hi, I have a very large dataset and I want to be able to stop in the middle of an epoch and continue training from the point where I stopped.

Is there any way to save the state of torch.utils.data.DistributedSampler and then load it later?
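
What I'm considering so far: since DistributedSampler produces its index order deterministically from its seed plus the epoch set via set_epoch, it seems like it should be enough to remember the current epoch and how many samples have already been consumed, then skip that many indices on resume. Below is a rough sketch of that idea; the ResumableDistributedSampler subclass and its state_dict / load_state_dict methods are my own naming, not an existing torch API, and I haven't tested it.

```python
import torch
from torch.utils.data import DistributedSampler

class ResumableDistributedSampler(DistributedSampler):
    """DistributedSampler that can skip the first `start_index` samples
    of the current epoch so training can resume mid-epoch."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.start_index = 0  # samples already consumed in this epoch

    def __iter__(self):
        # Parent yields the full (deterministic) index order for this epoch;
        # drop the part that was already processed before the checkpoint.
        indices = list(super().__iter__())
        return iter(indices[self.start_index:])

    def __len__(self):
        return self.num_samples - self.start_index

    def state_dict(self, consumed_samples):
        # epoch (+ the sampler's seed) reproduces the shuffle order;
        # consumed_samples records where to pick up again
        return {"epoch": self.epoch, "consumed_samples": consumed_samples}

    def load_state_dict(self, state):
        self.set_epoch(state["epoch"])
        self.start_index = state["consumed_samples"]
```

Is something like this the right approach, or is there a built-in way to do it?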