Apply a function (similar to map) on a tensor?

Hello

I have a vector v = (v_1, v_2, v_3, …, v_n)

For which I have a function f(v_i) = out_i

Is there a simple and clean way to define F such that :
F(v) = (f(v_1),f(v_2),f(v_3),…,f(v_n)) ?

Without using for loops, and possibly parallelizing the computation? Each instance of the function f is independent of the others.

Thanks!


Hi Lam!

As far as I am aware, pytorch does not have this kind of “map”
function.

However, pytorch supports many different functions that act
element-wise on tensors (arithmetic, cos(), log(), etc.). If you
can rewrite your function using element-wise torch tensor
operations, your composite function will also act element-wise,
and will do what you want.
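For example, a composite function such as f(x) = x² + cos(x) can be written entirely from element-wise torch operations, so it automatically applies to every element of a tensor at once (a minimal sketch; the function f here is just an illustration):

```python
import torch

v = torch.tensor([1.0, 2.0, 3.0])

# f acts element-wise because it is composed only of element-wise torch ops
def f(t):
    return t ** 2 + torch.cos(t)

out = f(v)  # applies f to every element of v in one vectorized call
```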

Good luck.

K. Frank

Thanks, @KFrank!

NumPy provides a way to vectorize a function, and its examples make the idea clear and easy to understand. I am not able to find a similar thing in PyTorch. A reference to either of the following would be really helpful:

  • How to use map() with PyTorch tensors?
  • Is there any API like np.vectorize?

PS: We want to apply a function f on each element of a list of tensors.
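For the list-of-tensors case, Python's built-in map (or a list comprehension) already works, since each list element is itself a tensor and any torch ops inside f remain vectorized per tensor (a sketch; f here is a made-up example function):

```python
import torch

tensors = [torch.ones(2), torch.zeros(3), torch.full((2, 2), 2.0)]

# hypothetical f: any function taking a single tensor
def f(t):
    return t * 10

# map applies f tensor-by-tensor over the list
results = list(map(f, tensors))
```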

Thanks!


For other readers,
I found a function that satisfies this need at this link:

import torch

tensor = torch.tensor([[3, 5, 1, 2], [3, 1, 5, 3], [7, 5, 8, 3]], dtype=torch.float)
print(tensor)
tensor.apply_(lambda x: x + 0.2)  # in-place: replaces each element with f(element)
print(tensor)

Is there a better approach? It comes with the following warning:

This function only works with CPU tensors and should not be used in code sections that require high performance.
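When the function can be expressed with tensor operations, the same computation stays vectorized and works on GPU tensors too; for the `lambda x: x + 0.2` above, broadcasting does the job directly (a sketch):

```python
import torch

tensor = torch.tensor([[3, 5, 1, 2], [3, 1, 5, 3], [7, 5, 8, 3]], dtype=torch.float)

# broadcasted addition replaces the element-by-element apply_ call
result = tensor + 0.2
```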


functorch.vmap implements this behavior (but is still in beta).

An example, after installing functorch:

import torch
import functorch

batch_size, feature_size = 3, 5
v = torch.randn(batch_size, feature_size)

def simple_row_func(feature_vec):
    # remove the mean of the row
    return feature_vec - feature_vec.mean()

# vmap maps simple_row_func over the batch (first) dimension
result = functorch.vmap(simple_row_func)(v)

# equivalent to
result = v - v.mean(1, keepdim=True)