# Batching most current operations

Sometimes the current operations can hardly meet our needs because they are not ‘vectorized enough’.
For example, here I have to call `tensor.eq(float)` to get each `cur_sp_mask`, so I have to use a `for` loop to enumerate each blob:

``````
import torch

bz = 2
a = (torch.rand(bz, 3, 4) * 4).int()
index = torch.tensor([[0, 1, 0, 2], [0, 1, 2, 3], [1, 2, 2, 3]])

a = a.view(bz, -1)
index_vec = index.view(-1)

for i in range(bz):
    max_index = int(torch.max(index_vec))
    a_vec = a[i]
    rec = torch.zeros_like(a_vec.float())
    # I have to enumerate each blob
    for j in range(max_index + 1):
        cur_sp_mask = index_vec.eq(float(j)).float()
        # Hypothetical stand-in for the per-blob op (the original body was
        # elided): fill every position of blob j with the blob's mean value.
        rec = rec + cur_sp_mask * (a_vec.float() * cur_sp_mask).sum() / cur_sp_mask.sum()
    rec = rec.view(3, 4)

    print('a[i]')
    print(a_vec.view(3, 4))
    print('index')
    print(index.view(3, 4))
    print('result')
    print(rec)
    print('------')
``````
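
For completeness: when the per-blob reduction is something simple like a mean (as in the stand-in above), both loops can already be avoided with existing scatter/gather ops. A minimal sketch, assuming that per-blob mean (this is a workaround, not the general batching feature requested here):

``````
import torch

bz = 2
a = (torch.rand(bz, 3, 4) * 4).int()
index = torch.tensor([[0, 1, 0, 2], [0, 1, 2, 3], [1, 2, 2, 3]])

a_flat = a.view(bz, -1).float()    # (bz, 12)
index_vec = index.view(-1)         # (12,)
n_blobs = int(index_vec.max()) + 1

# Per-blob sums for every batch element at once.
sums = torch.zeros(bz, n_blobs).scatter_add_(
    1, index_vec.expand(bz, -1), a_flat)
counts = torch.bincount(index_vec, minlength=n_blobs).float()

# Broadcast each blob's mean back to its positions.
rec = (sums / counts)[:, index_vec].view(bz, 3, 4)
``````

This handles the whole batch in one pass, but only because the reduction here happens to be a sum/mean; the general case is exactly what the batching requested below would cover.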

Another example: I want to create a list of tensors, each of which can be created with `torch.arange`:

``````
# Something like this:
import torch

x = torch.tensor([20, 53, 98]).long()
y = torch.tensor([110, 120, 132]).long()

# Now I have to:
a = []
for i in range(3):
    a.append(torch.arange(x[i], y[i]))

# However, I hope something like this could be
# more efficient than the for-loop:
a = torch.arange(x, y)
# a is a list full of tensors
``````

However, `torch.arange` does not support such a “batched” call.
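
For what it's worth, a batched `arange` can already be emulated with one flat `arange` plus per-range offsets; a minimal sketch (this offset trick is my own workaround, not an official API):

``````
import torch

x = torch.tensor([20, 53, 98]).long()
y = torch.tensor([110, 120, 132]).long()

lengths = y - x                                # length of each range
starts = torch.repeat_interleave(x, lengths)   # each start, repeated

# Positions 0..len-1 within each range, built from one flat arange.
flat_pos = torch.arange(int(lengths.sum()))
seg_offsets = torch.repeat_interleave(lengths.cumsum(0) - lengths, lengths)
flat = starts + flat_pos - seg_offsets

# Split back into a list of per-range tensors.
a = list(torch.split(flat, lengths.tolist()))
``````

The final `split` still produces a Python list, but all of the arithmetic happens in one vectorized pass instead of one `arange` call per range.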

It is definitely a huge amount of work to make most operations powerful enough to support such advanced “batching”. However, maybe something like this is possible:

``````
import torch

x = torch.tensor([20, 53, 98]).long()
y = torch.tensor([110, 120, 132]).long()

# warp_op is a hypothetical wrapper, not an existing API.
warp_x = warp_op(x)
warp_y = warp_op(y)

# When tensors are wrapped like this, the call could be
# executed more efficiently than a for-loop.
a = torch.arange(warp_x, warp_y)
# a is a list full of tensors
``````

Your suggestion seems to come close to what `vmap` could do, and there are a few proofs of concept (#32558, #32836) as well as an open PR with more information regarding the roadmap.
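
For readers unfamiliar with the idea: `vmap` lets you write a function for a single example and have it mapped over a batch dimension automatically. A minimal sketch using the `torch.vmap` API that later shipped in PyTorch, shown here only to illustrate the concept:

``````
import torch

# Write the computation for a single example...
def dot(x, y):
    return (x * y).sum()

x = torch.randn(4, 3)
y = torch.randn(4, 3)

# ...and vmap maps it over the leading batch dimension.
out = torch.vmap(dot)(x, y)   # shape (4,): one dot product per row
``````

Note that `vmap` requires outputs of uniform shape, so the ragged batched `arange` above (ranges of different lengths) is still outside its scope.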