Looping through a tensor

Suppose I have a tensor A of size (m, n). To loop through each row of this tensor, what I did was:

for row in A:
    # do something

But I saw many people did:

for row in A.split(1):
    # do something

Is there any difference between the two methods? Does the first method leak memory?


If A is a plain tensor, they’re both fine; if A is a Variable, the second is better because it uses fewer autograd ops. Neither one leaks memory.
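One practical difference worth noting: the two loops yield tensors of different shapes. Direct iteration indexes along dim 0 and drops the row dimension, while `split(1)` keeps it. A small sketch (the tensor values here are just illustrative):

```python
import torch

A = torch.arange(6.0).reshape(3, 2)  # a (3, 2) example tensor

# Method 1: direct iteration — yields A[0], A[1], ... each of shape (n,)
rows_iter = [row for row in A]

# Method 2: split into chunks of size 1 along dim 0 — each chunk has shape (1, n)
rows_split = list(A.split(1))

print(rows_iter[0].shape)   # torch.Size([2])
print(rows_split[0].shape)  # torch.Size([1, 2])
```

Both forms return views into A rather than copies, so neither accumulates extra memory per iteration.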