Not using loss.item() increased RAM utilization

What should happen when you don't use loss.item() for logging the loss values?
I was using AverageMeter (from the ImageNet tutorial) to store losses, and I forgot to call loss.item(). I believed this should have increased my GPU utilization and caused an out-of-memory error.

But instead something strange happened. It started increasing RAM utilization and literally froze my system. Why did this happen (and not what I expected to happen!!)?
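For reference, a minimal sketch of the mistake being described, assuming a toy model and a plain list standing in for AverageMeter's storage (names here are illustrative, not the original code):

```python
import torch

# Hypothetical toy setup standing in for the real training loop
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()

losses = []  # plays the role of AverageMeter's internal storage
for _ in range(3):
    x = torch.randn(4, 10)
    y = torch.randn(4, 1)
    loss = criterion(model(x), y)
    losses.append(loss)  # BUG: stores the tensor, not a float

# Each stored tensor still references its autograd graph,
# so none of those graphs can be freed between iterations.
assert all(l.grad_fn is not None for l in losses)
```

Every appended `loss` keeps its entire backward graph alive, so memory grows with every iteration instead of being released.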

Tensors come with metadata (stride, size, type, …) stored in CPU memory, and you probably kept a lot of these around, possibly along with the computation graphs needed for gradient calculation. When all you need is a scalar value, the CPU memory usage can exceed the GPU memory usage by quite a margin, so if you have a small machine with a sizeable GPU…
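The fix is to convert the loss to a plain Python float before storing it, which detaches it from the graph entirely. A minimal sketch with the same toy setup as above (names are illustrative):

```python
import torch

# Hypothetical toy setup; in the real code this is the training loop
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()

losses = []
for _ in range(3):
    x = torch.randn(4, 10)
    y = torch.randn(4, 1)
    loss = criterion(model(x), y)
    losses.append(loss.item())  # plain float; the graph can be freed

# Only scalars are kept around, no tensors and no graphs.
assert all(isinstance(l, float) for l in losses)
```

Calling `.item()` copies the scalar value out of the tensor, so the tensor (and the graph hanging off it) becomes garbage-collectible as soon as the iteration ends.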

Best regards