Why do older versions of DistributedSampler sample indices in a different way?

The current PyTorch core implementation splits the whole dataset in an interleaved fashion:

indices = indices[self.rank:self.total_size:self.num_replicas]

The code in github.com/nvlabs/wetectron (derived from an older version of Detectron2) instead gives each rank a contiguous block (see wetectron/distributed.py at commit 69436c0c):
indices = indices[offset : offset + self.num_samples]
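A minimal sketch of the difference between the two schemes, assuming two replicas, an evenly divisible dataset, and that `offset` in the wetectron code equals `num_samples * rank` (that definition is not shown in the snippet above, so it is an assumption here):

```python
# Compare the two index-partitioning schemes for 2 replicas.
# Illustrative only; not the actual DistributedSampler source.
num_replicas = 2
dataset_len = 8
indices = list(range(dataset_len))
total_size = dataset_len                 # assume dataset_len divides evenly
num_samples = total_size // num_replicas

# Current PyTorch core: interleaved slicing (stride = num_replicas)
interleaved = {rank: indices[rank:total_size:num_replicas]
               for rank in range(num_replicas)}
# rank 0 -> [0, 2, 4, 6], rank 1 -> [1, 3, 5, 7]

# Older wetectron/Detectron2 style: one contiguous block per rank
# (assumes offset = num_samples * rank)
contiguous = {rank: indices[rank * num_samples : rank * num_samples + num_samples]
              for rank in range(num_replicas)}
# rank 0 -> [0, 1, 2, 3], rank 1 -> [4, 5, 6, 7]
```

Both schemes partition the (usually pre-shuffled) index list into equal-sized, disjoint shards; they differ only in which indices land on which rank.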

Why did the older Detectron2 code use a different sampling scheme?