How can I cache results of an element-wise vectorised function?

I have a function fn that takes a number a and returns a new number b. I have vectorised it into a second function some_fn that takes a tensor as input, applies fn element-wise, and returns a tensor of the same size.

As an example:

def some_fn(x):
    # Some vectorised element-wise transformation of the tensor `x` that takes a while
    # and returns a tensor `result` of the same size. Each element of `x` is
    # independent of the others.
    return result

tensor1 = torch.tensor([1,2,3,4,5])
tensor2 = torch.tensor([3,4,5,6,7])

result1 = some_fn(tensor1)  
result2 = some_fn(tensor2)

print(result1)  # e.g. 15, 23, 39, 42, 54
print(result2)  # e.g. 39, 42, 54, 60, 77 

In the above example there is some overlap between tensor1 and tensor2, namely [3,4,5]. These values give the same results every time they are passed through some_fn, so I would like to cache those results rather than calculate them again (because they are expensive to calculate).

In addition, the tensors may be stored on the GPU (although I’m not sure whether that makes a difference or not).
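To make the desired behaviour concrete, here is a minimal sketch of one possible cache: a Python dict keyed by the scalar input values, with the uncached values batched into a single vectorised call. The `some_fn` body (`x * x + 1`) is a placeholder for the real expensive function, and the round-trip through Python lists means this sketch really only suits CPU tensors:

```python
import torch

calls = []  # records each batch of values that actually gets computed

def some_fn(x):
    # Placeholder for the expensive vectorised transformation.
    calls.append(x.tolist())
    return x * x + 1

_cache = {}  # maps scalar input values to their computed results

def cached_fn(x):
    vals = x.tolist()
    misses = sorted({v for v in vals if v not in _cache})
    if misses:
        # One vectorised call covering only the uncached values.
        computed = some_fn(torch.tensor(misses))
        _cache.update(zip(misses, computed.tolist()))
    return torch.tensor([_cache[v] for v in vals])

tensor1 = torch.tensor([1, 2, 3, 4, 5])
tensor2 = torch.tensor([3, 4, 5, 6, 7])
result1 = cached_fn(tensor1)
result2 = cached_fn(tensor2)  # only 6 and 7 are newly computed
```

With this sketch, the second call only passes [6, 7] into some_fn; the overlapping [3, 4, 5] come straight from the dict.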

Any help appreciated.

Well, if the inputs are “small numbers”, you could use them as indices into a caching tensor.
But if your code is parallelized and you need to compute some values anyway, the speed gains might be limited, and it’ll be a lot of bookkeeping to get this right.
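Assuming the inputs are non-negative integers below some known bound, the caching-tensor idea might be sketched like this. `MAX_VAL` and the squaring `some_fn` are stand-ins; for GPU tensors, `cache` and `filled` would need to live on the same device as the inputs:

```python
import torch

MAX_VAL = 1000  # assumed upper bound on the input values

def some_fn(x):
    # Stand-in for the expensive vectorised transformation.
    return x * x + 1

# One slot per possible input value, plus a mask of which slots are filled.
# Move these to the inputs' device (e.g. .cuda()) to keep everything on the GPU.
cache = torch.zeros(MAX_VAL + 1, dtype=torch.long)
filled = torch.zeros(MAX_VAL + 1, dtype=torch.bool)

def cached_fn(x):
    # Values of x whose cache slot is still empty, deduplicated.
    misses = x[~filled[x]].unique()
    if misses.numel() > 0:
        cache[misses] = some_fn(misses)  # compute only the missing values
        filled[misses] = True
    return cache[x]
```

Everything here is tensor indexing, so it stays vectorised; the bookkeeping cost is the extra mask lookup and the `unique` on each call.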

Best regards

Thomas