Does .item() affect performance?

Hi there, as far as I know we can fill our RAM by never releasing the reference to the computation graph, for example:

```python
losses = []
for batch in ...:
    ...
    losses.append(loss)
```

The problem above is that each appended loss tensor still holds a reference to its computation graph! So it's recommended to do the following instead, in order to release it:

```python
losses.append(loss.item())  # not losses.append(loss)
```

My question is if we call .item() on loss for let’s say 2 to 3 times at each batch training step, then is it gonna affect the performance? Or are there any other issues that it might have?

.item() ensures that you append only the float values to the list rather than the tensor itself. You are basically converting a single-element tensor to a Python number. This should not affect the performance in any way.
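For illustration, a minimal sketch of the conversion (the tensor value here is made up):

```python
import torch

# a single-element tensor that is part of a graph
loss = torch.tensor([0.5], requires_grad=True).sum()

value = loss.item()  # plain Python float, no graph attached

print(type(loss))   # a torch.Tensor
print(type(value))  # a plain float
```

Note that .item() only works on tensors with exactly one element; for larger tensors you would use .tolist() or .detach() instead.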

As @charan_Vjy mentioned, .item() is pretty inexpensive. Are you running into some situation where it is affecting the performance?

Thanks @charan_Vjy, @richard. I had read somewhere that .item() does something related to the computational graph, so I thought it might affect performance and memory in some way. I ran some tests and found no difference as such (at least with small models like AlexNet).

.item() turns a Tensor into a Python number. Autograd can only keep track of computations on Tensors, so using the result of .item() effectively creates a new value that is not a part of the computation graph.
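A small sketch of this behavior (variable names are hypothetical):

```python
import torch

x = torch.ones(1, requires_grad=True)
y = x * 2            # y is a Tensor tracked by autograd

val = y.item()       # plain Python float, outside the graph
z = val * 3          # ordinary Python arithmetic; autograd cannot see it

print(y.requires_grad)  # autograd is still tracking y
print(type(val))        # but val is just a float
```

Calling backward() through `y` still works as usual; only the value extracted via .item() is cut off from gradient tracking.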