Hello, I was just wondering why PyTorch has no tool to map a function over tensor elements, something like tf.map_fn or tf.while_loop.
Looping over big tensors with native Python loops is very slow because each iteration needs CPU computations. Wouldn't having a tool like this make everything easier?
If the functions you give to these tools are also Python functions, then they will be slow as well. The loop iteration overhead won't be much larger than the body of the loop itself, right?
The reason those constructs exist, I think, is that TensorFlow does not allow you to use Python for loops (in TF1 at least), so you need them to be able to express loops at all. In PyTorch, you can simply use regular Python code; you don't need these special constructs.
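To illustrate the point, here is a minimal sketch (using a made-up elementwise function, `x * x + 1`): a plain Python loop does what tf.map_fn would do, but the idiomatic PyTorch approach is to express the function with tensor operations so the whole computation runs as vectorized kernels.

```python
import torch

x = torch.arange(6, dtype=torch.float32)

# map_fn-style: apply the function one element at a time in Python.
# Slow for big tensors, since every iteration incurs Python/CPU overhead.
looped = torch.stack([xi * xi + 1 for xi in x])

# Idiomatic PyTorch: the same function written with tensor ops,
# executed in a few vectorized kernels over the whole tensor at once.
vectorized = x * x + 1

print(torch.equal(looped, vectorized))  # True
```

Both versions compute the same result; the difference is only where the loop lives (Python interpreter vs. compiled tensor kernels).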