Hi there! As far as I know, we can fill up our RAM by not releasing the pointer to the computational graph, for example:
losses = []
# ...training on batch...
losses.append(loss)
The problem above is that loss still holds a reference to the computational graph! So it is recommended to do the following instead, in order to release it:
losses.append(loss.item()) # not losses.append(loss)
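Here is a minimal sketch of the difference (the weights, loop, and "loss" here are hypothetical stand-ins for a real training step): `.item()` returns a plain Python float, so nothing appended to the list keeps the autograd graph alive.

```python
import torch

# Hypothetical parameter and "training loop" standing in for a real model.
w = torch.randn(3, requires_grad=True)
losses = []

for _ in range(2):
    loss = (w * 2).sum()        # loss is a tensor attached to the graph
    losses.append(loss.item())  # .item() -> plain float, graph not retained
    loss.backward()             # graph is freed after backward as usual
    w.grad = None

# Every stored entry is a detached Python float, not a tensor.
assert all(isinstance(x, float) for x in losses)
```

Had we appended `loss` itself, each list entry would keep its whole graph (and any GPU activations it references) alive until the list is cleared.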
My question is: if we call .item() on loss, say, 2 to 3 times at each batch training step, will that affect performance? Are there any other issues it might cause?