In the fairseq code, I saw some uses of the "out=" argument for tensor operations, similar to the examples below. Are there any performance benefits to using the "out=" parameter? It seems less natural and less readable. Does it offer a performance gain, such as avoiding the allocation of intermediate results? Or is it just a way to specify the destination tensor for the result?
import torch

c = torch.empty(3, 4)  # preallocated so both forms below are valid
a = torch.rand(3, 4)
torch.add(a, 3, out=c)  # Does this have any advantage over the equivalent below?
# the above is equivalent to
c[:] = a + 3
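For what it's worth, here is a minimal sketch (my own addition, not fairseq code) to convince yourself that out= computes the result directly into c's existing storage. Both forms leave the result in c, but c[:] = a + 3 first materializes a + 3 as a temporary tensor and then copies it over, while out= skips the temporary allocation and the extra copy pass:

# Check that out= reuses c's storage rather than allocating a new result tensor.
import torch

a = torch.rand(3, 4)
c = torch.empty(3, 4)
ptr = c.data_ptr()          # address of c's underlying storage
torch.add(a, 3, out=c)      # result is written directly into c
assert c.data_ptr() == ptr  # same storage: no new allocation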
c = torch.empty(12)
a = torch.rand(3, 4)
torch.add(a, 3, out=c.view(3, 4))  # Does this have any advantage over the equivalents below?
# the above is equivalent to
c[:] = (a + 3).view(12)
# or
c.view(3, 4)[:] = a + 3
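If you want to actually measure the difference, a rough micro-benchmark sketch like the one below compares the two approaches (my own addition; c.copy_(a + 3) stands in for c[:] = a + 3, since an assignment statement cannot appear in a lambda). Absolute numbers will vary with machine and tensor size:

# Rough timing comparison, CPU tensors assumed.
import timeit
import torch

a = torch.rand(1000, 1000)
c = torch.empty(1000, 1000)

t_out = timeit.timeit(lambda: torch.add(a, 3, out=c), number=1000)
t_tmp = timeit.timeit(lambda: c.copy_(a + 3), number=1000)  # temporary + copy
print(f"out=:             {t_out:.4f} s")
print(f"temporary + copy: {t_tmp:.4f} s")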