Are repeated tensor method calls unnecessarily inefficient?

I inherited someone’s code which has usages like the following all over the place:

        a = colors[idxs.long()]
        b = coords[idxs.long()]

        c = foo1(offsets.cuda())
        d = foo2(offsets.cuda())

In the first two lines, the call to idxs.long() is repeated; in the last two, the call to offsets.cuda() is repeated.

Is it more efficient to assign the results of those calls to temporary variables and reuse them, or does some kind of internal optimization take care of such cases anyway?
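In other words, I mean rewriting it to something like the sketch below (the shapes and values here are just made up for illustration; foo1/foo2 from the original code are omitted):

```python
import torch

colors = torch.rand(10, 3)
coords = torch.rand(10, 3)
idxs = torch.tensor([0.0, 2.0, 4.0])  # hypothetical float indices

# Convert once and reuse the result; each extra .long() call would
# otherwise allocate and fill a fresh int64 tensor.
idxs_long = idxs.long()
a = colors[idxs_long]
b = coords[idxs_long]

# Same idea for the device transfer: one host-to-device copy, not two.
# (Guarded so the sketch also runs on CPU-only machines.)
offsets = torch.arange(5)
if torch.cuda.is_available():
    offsets_gpu = offsets.cuda()
```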


I would avoid repeating the transformations and transform each tensor once: neither Python nor PyTorch caches the result of such calls, so every idxs.long() allocates a new tensor and every offsets.cuda() performs another host-to-device copy.
Often you also don't need the same tensor in different dtypes or on different devices, so I'm wondering whether e.g. offsets is still used on the CPU anywhere later on.
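To see that nothing is cached between calls, here is a minimal stand-in sketch that needs no PyTorch: FakeTensor is a hypothetical class that just counts how many times a conversion method runs, showing that Python re-executes the method on every call.

```python
class FakeTensor:
    """Hypothetical stand-in for a tensor; counts conversion calls."""

    def __init__(self):
        self.conversions = 0

    def long(self):
        # A real .long() would allocate and return a new int64 tensor;
        # here we only record that the conversion ran again.
        self.conversions += 1
        return self

idxs = FakeTensor()

# Repeated-call style from the question: the conversion runs twice.
_ = idxs.long()
_ = idxs.long()
assert idxs.conversions == 2

# Hoisted style: the conversion runs once and the result is reused.
idxs_long = idxs.long()
_ = idxs_long
_ = idxs_long
assert idxs.conversions == 3  # only one additional conversion
```

So the fix is simply idxs_long = idxs.long() once, then indexing with idxs_long in both places (and likewise a single offsets.cuda() temporary).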