Avoid restarting DataLoader workers when a new epoch starts

Hi all,
As far as I know, when training for multiple epochs with num_workers > 0, the DataLoader worker processes are restarted every time the epoch changes.
Spawning new workers at each epoch is time consuming, and more importantly, on my machine (running PyTorch under the srun command) the program always deadlocks at the epoch boundary; when I set num_workers=0, everything is fine.
So, is there any way to avoid restarting the DataLoader workers at each epoch?
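For reference, here is a minimal sketch of the setup I mean; the dataset, batch size, and worker count are just placeholders, not my actual training code:

```python
# Minimal sketch of a multi-epoch training loop with num_workers > 0.
# Dataset and sizes are hypothetical stand-ins.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(100).float())

# With num_workers > 0, iterating the loader spawns fresh worker processes,
# and they are torn down once the iterator is exhausted -- so the start-up
# cost is paid again at every epoch boundary.
loader = DataLoader(dataset, batch_size=10, num_workers=2)

for epoch in range(2):
    for (batch,) in loader:  # workers are re-spawned here each epoch
        pass
```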
Many thanks!