Shared memory leak while using multiprocessing

Hi, I have set the torch multiprocessing sharing strategy to file_system for a DataLoader with multiple workers. After a certain number of epochs (not the same every time I run), one of the workers fails due to insufficient shared memory. While monitoring /dev/shm I notice an increase at the end of each epoch, so it seems some shared tensors are not being freed. Is there a known issue about this? Or is there a way to identify where the memory leak could be happening?
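For reference, a minimal sketch of the kind of setup I mean (the dataset, tensor sizes, and worker count here are placeholders, not my real code), including how I check /dev/shm usage after each epoch:

```python
import os
import torch
import torch.multiprocessing as mp
from torch.utils.data import Dataset, DataLoader

# file_system strategy: tensors are shared via files in /dev/shm
# instead of file descriptors.
mp.set_sharing_strategy('file_system')

def shm_used_bytes():
    """Bytes currently used in /dev/shm (Linux only)."""
    st = os.statvfs('/dev/shm')
    return (st.f_blocks - st.f_bfree) * st.f_frsize

class RandomDataset(Dataset):  # placeholder dataset
    def __len__(self):
        return 64

    def __getitem__(self, idx):
        return torch.randn(3, 32, 32)

loader = DataLoader(RandomDataset(), batch_size=8, num_workers=2)

for epoch in range(2):
    for batch in loader:
        pass  # training step would go here
    # usage here grows epoch over epoch in my runs
    print(f"epoch {epoch}: /dev/shm used = {shm_used_bytes()} bytes")
```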