Memory management in PyTorch implementation of multi-processing queues

I have a question for the PyTorch development team.

How is the memory consumed by queues managed in PyTorch's implementation of the multiprocessing library (`torch.multiprocessing`)?

If you can point me to the relevant piece of code (if available) and/or provide a textual description, I would appreciate it.

@VitalyFedyunin Could you help out here, since it's a torch.multiprocessing question?

Please check

and

as the methods are different for CPU and GPU tensors.

Generally speaking, we pass storage descriptors and do usage reference counting.
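To illustrate the idea of passing a descriptor rather than the data itself, here is a minimal sketch using Python's standard-library `multiprocessing.shared_memory`. This is an analogy only, not PyTorch internals: the parent creates a shared segment, sends only its name (the "descriptor") through a queue, and the child attaches to the same memory. In `torch.multiprocessing`, the equivalent handle for a tensor's storage is serialized when a tensor is put on a queue, and the storage's lifetime is managed by reference counting across processes.

```python
# Illustrative sketch (NOT PyTorch internals): a queue carries only a
# shared-memory descriptor (the segment name), never the payload bytes.
from multiprocessing import Process, Queue
from multiprocessing.shared_memory import SharedMemory

def _worker(q: Queue) -> None:
    name = q.get()                   # receive the descriptor
    shm = SharedMemory(name=name)    # attach to the existing segment
    shm.buf[:5] = b"hello"           # write through the shared memory
    shm.close()                      # detach; the parent still owns it

def demo() -> str:
    shm = SharedMemory(create=True, size=5)
    q = Queue()
    p = Process(target=_worker, args=(q,))
    p.start()
    q.put(shm.name)                  # send only the name, not the data
    p.join()
    data = bytes(shm.buf[:5])        # parent sees the child's write
    shm.close()
    shm.unlink()                     # creator frees the segment
    return data.decode()

if __name__ == "__main__":
    print(demo())  # -> hello
```

Python's `shared_memory` frees a segment only on an explicit `unlink()`; PyTorch instead refcounts each shared storage so the memory is released automatically when the last process holding a reference drops it.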
