How to train a model on a random subset of the dataset every N epochs

Hi, I have a dataset of 25,000 images for training, and I need to train for 50,000 epochs.
I want to train the model on 1,000 random images per block of 10 epochs. In other words: for the first 10 epochs the model trains on one random sample of 1,000 images, then for the next 10 epochs it trains on a fresh random sample of 1,000 images, and so on. How can I do that? What changes should I make, and in which part of the source code?
I attach the source code that I want to add this behaviour to.
It is worth mentioning that I can only use batch_size = 2; batch sizes of 3, 4, … do not work for me, so I am limited in batch_size.

The collate function:

import torch

def collate_fn(data_bunch):
  """A function for the dataloader to return a batch dict of the given keys.

  data_bunch: list of dictionaries
  """
  dict_data_bunch = {}

  for i in data_bunch:
    for (key, value) in i.items():
      if key not in dict_data_bunch:
        dict_data_bunch[key] = []
      dict_data_bunch[key].append(value)  # collect the value (this append was missing)

  for key in dict_data_bunch:
      dict_data_bunch[key] = torch.stack(dict_data_bunch[key], dim = 0)

  if 'img' in dict_data_bunch:
    ## Pre-processing for ViT (vit_feat_extract is defined elsewhere in my code)
    dict_data_bunch['img'] = vit_feat_extract(list(dict_data_bunch['img']), return_tensors = 'pt')['pixel_values']

  return dict_data_bunch

The LightningDataModule:

class DataModule(pl.LightningDataModule):

  def __init__(self, train_dataset, val_dataset, batch_size = 1):
    super().__init__()
    self.train_dataset = train_dataset
    self.val_dataset = val_dataset
    self.batch_size = batch_size

  def train_dataloader(self):
    return DataLoader(self.train_dataset, batch_size = self.batch_size,
                      collate_fn = collate_fn, shuffle = True, num_workers = 2, pin_memory = True)

  def val_dataloader(self):
    return DataLoader(self.val_dataset, batch_size = self.batch_size,
                      collate_fn = collate_fn, shuffle = False, num_workers = 2, pin_memory = True)