# Indexing multiple blocks of a torch tensor simultaneously

Hi,

what is the fastest way to index multiple blocks of a torch tensor simultaneously in order to modify them?

Here is an example: I take the first two rows of every 4-row block and modify them:

```python
import torch

a = torch.rand(8, 8)
a[0:2, :] = 2 * a[0:2, :]
a[4:6, :] = 2 * a[4:6, :]
```

I want to avoid a Python for loop, to reduce the execution time.

Thanks!
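For reference, the two slice assignments above can also be expressed in a single loop-free statement with a boolean row mask. This is a sketch of my own (the mask construction is not from the thread), assuming the block pattern repeats every 4 rows:

```python
import torch

a = torch.ones(8, 8)

# True for the first two rows of every 4-row block (rows 0, 1, 4, 5, ...)
mask = (torch.arange(a.size(0)) % 4) < 2
a[mask] = 2 * a[mask]
```

Boolean-mask indexing selects all marked rows at once, so the number of kernel launches no longer grows with the number of blocks.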

```
In : a = torch.ones(8, 8)
...: a[0:2, :] = 2 * a[0:2, :]
...: a[4:6, :] = 2 * a[4:6, :]

In : a
Out:
tensor([[2., 2., 2., 2., 2., 2., 2., 2.],
        [2., 2., 2., 2., 2., 2., 2., 2.],
        [1., 1., 1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1., 1., 1.],
        [2., 2., 2., 2., 2., 2., 2., 2.],
        [2., 2., 2., 2., 2., 2., 2., 2.],
        [1., 1., 1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1., 1., 1.]])
```

Using a simple timing function (e.g. the `time` module) you can measure the speed of both approaches.

Right, I have a huge matrix and I want to modify the rows in parallel, avoiding the for loop, so I am looking for a better solution.

I got it!

Instead of running the for loop over the whole matrix, I just have to run it over the block size:

```python
import torch

a = torch.ones(8, 8)
print(a)
block_size = 2
for i in range(block_size):   # loop only over the rows inside one block
    a[i::4] = 2 * a[i::4]     # strided slice hits that row in every block
print(a)
```
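The remaining loop can also be removed entirely by viewing the matrix as `(num_blocks, 4, cols)` and scaling the first two rows of every block in one statement. A sketch under the assumption that the row count is divisible by 4 and the tensor is contiguous (so `view` works and shares storage):

```python
import torch

a = torch.ones(8, 8)
blocks = a.view(-1, 4, a.size(1))  # shape (2, 4, 8); a view sharing storage with a
blocks[:, :2, :] *= 2              # scale the first two rows of every 4-row block
print(a)                           # rows 0, 1, 4, 5 are now doubled
```

Because `blocks` is a view, the in-place multiplication writes straight into `a` without copying.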