Dataloader for replay buffer

I’m trying to build an efficient optimization process for AlphaZero. In my setting, the optimization process reads randomly selected rows from an h5py replay buffer, which is asynchronously updated by another process, and trains the model on that minibatch of randomly chosen rows. Since the optimization step and the buffer read take about the same amount of time, I want to use a DataLoader to parallelize them.

Given that the dataset is dynamically updated and grows in size (until it reaches a certain threshold), is there anything I should be careful about? The `__getitem__()` of my custom Dataset opens the h5py file and returns the row at the specified index. I want batches of 256 rows, so I assume the following sampler is what I need:
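Concretely, my Dataset looks roughly like this (the file path and dataset key are placeholders; I open the file lazily so that each DataLoader worker gets its own handle):

```python
import h5py
import numpy as np
from torch.utils.data import Dataset

class ReplayBufferDataset(Dataset):
    """Reads one row per index from an h5py-backed replay buffer.

    The buffer file is written asynchronously by a separate self-play
    process; h5py's SWMR mode may be needed for truly concurrent access.
    """

    def __init__(self, path, key="replay"):
        self.path = path
        self.key = key
        self.file = None  # opened lazily, once per worker process

    def __len__(self):
        # Query the file each time so the length reflects the growing buffer.
        with h5py.File(self.path, "r") as f:
            return f[self.key].shape[0]

    def __getitem__(self, idx):
        if self.file is None:
            self.file = h5py.File(self.path, "r")
        # Returns one row as a numpy array; the default collate_fn
        # stacks these into a batch tensor.
        return self.file[self.key][idx]
```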

```python
BatchSampler(RandomSampler(dataset), batch_size=256)
```

Is this compatible with the dynamic nature of my dataset?
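For context, here is a minimal self-contained sketch of how I plan to wire the sampler into the DataLoader (the toy dataset is just a stand-in for the h5py-backed one above):

```python
import torch
from torch.utils.data import DataLoader, Dataset, BatchSampler, RandomSampler

class ToyDataset(Dataset):
    """Stand-in for the h5py-backed replay buffer dataset."""
    def __init__(self, n=1000, dim=8):
        self.data = torch.randn(n, dim)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

dataset = ToyDataset()

# BatchSampler yields lists of 256 indices per step; DataLoader then
# calls __getitem__ once per index and collates the rows into one batch.
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=256, drop_last=False)
loader = DataLoader(dataset, batch_sampler=batch_sampler, num_workers=0)

batch = next(iter(loader))  # a (256, 8) tensor of randomly sampled rows
```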