Hi! I’m new to PyTorch, and I’m trying to figure out how it works under the hood.
Specifically, I want to know exactly when PyTorch allocates and deallocates tensors (especially output gradients and weight gradients) during training.
To that end, I’m trying to log allocations/deallocations. I used some hooks together with torch.cuda.memory_allocated(), but it’s hard to pin down the exact moment and which tensor was involved.
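For reference, this is roughly the approach I tried (a minimal sketch, not my full code): attaching forward/backward hooks to each module and printing the torch.cuda.memory_allocated() reading at each step. It shows *when* the allocator's total changes, but not *which* tensor caused the change, which is the limitation I'm running into.

```python
import torch
import torch.nn as nn

def log_mem(tag):
    # Hook that reports the CUDA caching allocator's current total.
    # Works for both forward hooks (module, input, output) and
    # full backward hooks (module, grad_input, grad_output).
    def hook(module, *args):
        if torch.cuda.is_available():
            mb = torch.cuda.memory_allocated() / 2**20
            print(f"{tag} {module.__class__.__name__}: {mb:.2f} MiB allocated")
    return hook

def attach_hooks(model):
    for m in model.modules():
        m.register_forward_hook(log_mem("after forward"))
        m.register_full_backward_hook(log_mem("after backward"))

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
attach_hooks(model)

if torch.cuda.is_available():
    model = model.cuda()
    out = model(torch.randn(8, 64, device="cuda"))
    out.sum().backward()  # prints one line per module per phase
```

The problem is that memory_allocated() only gives me an aggregate number per hook call, so I can only infer allocations/deallocations from deltas between hooks, not attribute them to a specific gradient tensor.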
Is there a way to get logs for VRAM allocation/deallocation (something like VLOG in TensorFlow)? Or what approach would you recommend to achieve this?