Hi!
In my experience, when training on a Dataset with PyTorch, the CPU memory usage stays stable across iterations. Is that because the DataLoader automatically deallocates the CPU memory occupied by the previous batch of tensors? How does that happen? And where can I find more details about the DataLoader's internals?
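For context, here is a minimal sketch of the kind of training loop I mean (the dataset, model, and hyperparameters are just placeholders, not my actual code). Each iteration, the names `inputs` and `targets` are rebound to the new batch, so my guess is the previous batch's tensors lose their last reference and get freed:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for my real one
dataset = TensorDataset(torch.randn(10_000, 32),
                        torch.randint(0, 2, (10_000,)))
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2)

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(5):
    for inputs, targets in loader:
        # At this point the tensors from the previous iteration are no
        # longer referenced, so (I assume) their CPU memory can be freed.
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
```

Watching RSS while this runs, the memory plateaus after the first few batches rather than growing each iteration, which is the behavior I'd like to understand.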