Shared Randomness when using SubsetRandomSampler

My simplified use case is as follows: I have two training instances. In the first instance, given a dataset, I create a mask of indices to choose a subset of the data and build a loader:

loader1 = DataLoader(dataset, sampler=SubsetRandomSampler(mask1), batch_size=64)

For the second training instance I use the same dataset (but with one data point changed/corrupted) and follow the same procedure to create the loader. How do I ensure that the two training instances generate the same mini-batches on each call? I essentially need to measure the divergence between the two models' weights when the order in which the data is processed stays the same (only one data point differs).
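To illustrate what I mean by "same mini-batches", here is a minimal sketch of one approach I am considering: passing an explicitly seeded `torch.Generator` to each `SubsetRandomSampler`, so both loaders draw the same permutation of the subset. The toy dataset, the `mask` indices, and the helper `make_loader` are illustrative, not my real setup:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, SubsetRandomSampler

# Toy dataset of 100 scalar samples (stand-in for the real dataset).
dataset = TensorDataset(torch.arange(100).float().unsqueeze(1))

# Hypothetical mask: indices of the chosen subset.
mask = list(range(0, 100, 2))

def make_loader(seed):
    # A generator seeded identically in both training instances
    # should make the sampler draw the same index order.
    g = torch.Generator()
    g.manual_seed(seed)
    return DataLoader(dataset,
                      sampler=SubsetRandomSampler(mask, generator=g),
                      batch_size=8)

# Two loaders built with the same seed yield identical batch sequences.
batches1 = [b[0] for b in make_loader(0)]
batches2 = [b[0] for b in make_loader(0)]
same_order = all(torch.equal(a, b) for a, b in zip(batches1, batches2))
```

If this is the right direction, I assume I would just reuse the same seed in both instances (with the second dataset carrying the corrupted point) and keep everything else, including `num_workers` and any other randomness, identical.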