[SOLVED] Indexed/broadcast subtraction over tensors doesn't work like in numpy


So I’m pretty new to pytorch and have converted a lot of numpy code over without problems. However, it seems that indexed subtraction doesn’t work in the intuitive way it does in numpy. So this operation, where I broadcast some indices over array1 and subtract array2:

array1[np.arange(3)][np.arange(10)+np.arange(5,10,1)] -= array2

Does not change array1.

What am I doing wrong? Or is this not currently possible in pytorch?


I think the problem occurs due to the successive indexing, which might work on a copy of the tensor.
Could you somehow combine the indexing into just one expression for array1?
If you need some help, could you post the shapes of array1 and array2, so that we could have a look?
Currently the second index won’t work either, as np.arange(10) and np.arange(5, 10, 1) have different shapes and cannot be broadcast together for the addition.
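To illustrate the copy issue, here is a minimal sketch with small made-up tensors: indexing a tensor with another tensor (advanced indexing) returns a copy, so an in-place subtraction applied after a second round of indexing never reaches the original tensor.

```python
import torch

x = torch.zeros(3, 5)
idx = torch.tensor([0, 2])

# Advanced indexing with a tensor returns a copy, so the in-place
# subtraction below only modifies that temporary copy, not x.
x[idx][:, 1] -= 1.
print(x.sum())  # tensor(0.) — x is unchanged

# Combining everything into a single indexing expression writes back into x.
x[idx, 1] -= 1.
print(x.sum())  # tensor(-2.)
```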

Hi ptrblck

Ok, here is an example of the sizes:

array1: [3000, 10000]
index1: [369]
index2: [121]
index3: [3, 1]
array2: [369,1,121]


If I print the left-hand side and the right-hand side I get this:

array1[index1][:,index2+index3]: torch.Size([369, 3, 121])


array2: torch.Size([369, 1, 121])

I should also mention that I tried .sub() (and .sub_()) and .add() (and .add_()) without success. And if I convert all these tensors to numpy arrays, this broadcast subtraction then seems to work in place as I originally intended.
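For reference, here is a sketch reproducing those shapes with random stand-in indices (the values are hypothetical; only the shapes match the ones listed above). It shows where the [369, 3, 121] shape on the left-hand side comes from:

```python
import torch

array1 = torch.zeros(3000, 10000)
index1 = torch.randint(0, 3000, (369,))
index2 = torch.randint(0, 100, (121,))
index3 = torch.randint(0, 100, (3, 1)) * 100   # keep index2 + index3 < 10000

# index2 (shape [121]) and index3 (shape [3, 1]) broadcast to [3, 121]
print((index2 + index3).shape)                   # torch.Size([3, 121])

# so the chained indexing yields [369, 3, 121], matching array2
print(array1[index1][:, index2 + index3].shape)  # torch.Size([369, 3, 121])
```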

EDIT: I collapsed index3 to a single value, but the in-place subtraction still doesn’t seem to work.

Thanks for the information.
So basically you would like to index twice in dim0?
The first time using index1 and based on this result you would index again in both dimensions using index2+index3?
Is that correct?

Yeah, I should probably have tested this more before posting, but I’m wondering if any type of in-place broadcast subtraction works, let alone my triple-index version.

But perhaps indexing twice, as you say, is the breaking point here. Are you suggesting that I get rid of the double indexing? Let me try that…

I tried to use your shape information to come up with an example and only then realized how your indexing is probably supposed to work. Does this version actually work correctly in numpy?

I think you may be right, this double-indexing version may not work even in numpy.

So this funky way of indexing was done specifically to deal with the tensor format, but the original numpy code uses a single contiguous indexing expression, i.e. [index1, index2 + index3]…

I think I need to figure out how to do that in pytorch, if that’s possible at all.

EDIT: Thanks for pointing this out!!

Maybe we could look at the problem from the other direction: given a small sample tensor array1 of shape [3, 10], what would the result be? Alternatively, how did you calculate these indices? Maybe we can simplify the problem a bit.

Ok. So in-place subtraction does work for a simpler indexing version of the above; that works in pytorch.


The question is how to add extra dimensions so I can simultaneously subtract array2 at multiple index2 locations.

index2 can have multiple indices:

x = torch.zeros(10, 10)
idx1 = torch.tensor([0, 1, 2])
idx2 = torch.tensor([3, 4, 5])
x[idx1, idx2] -= 1.

Ok, so I think this works now, but I don’t understand why.

> array1[index1[:, np.newaxis], index2 + index3[:, np.newaxis]] -= array2

If you understand why this works, maybe you can explain it and we can mark this as solved for others.

I’m not even sure I actually had a pytorch problem, though as I said, numpy seems to have an easier time with this kind of broadcasting.

Thanks so much for your help.

I think you’ll find the best description in the second example of the numpy advanced indexing docs. np.newaxis inserts a length-1 dimension, so the index arrays broadcast against each other and you don’t have to manually repeat them.
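As a small illustration of that broadcasting (with made-up shapes): pytorch accepts None in an index the same way numpy accepts np.newaxis, and because everything sits in one combined indexing expression, the in-place subtraction writes back into the original tensor.

```python
import torch

array1 = torch.zeros(6, 8)
index1 = torch.tensor([0, 2, 4])   # row indices, shape [3]
index2 = torch.tensor([1, 3])      # column indices, shape [2]

# index1[:, None] has shape [3, 1]; together with index2 (shape [2])
# the index tensors broadcast to [3, 2], selecting a 3x2 grid of elements.
# Since this is one combined indexing expression, -= writes back into array1.
array1[index1[:, None], index2] -= 1.
print(array1.sum())  # tensor(-6.)
```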