So I’m pretty new to PyTorch and have converted a lot of NumPy code over without problems. However, it seems that indexed subtraction doesn’t work the intuitive way it does in NumPy, namely this operation where I broadcast some indexes over array1 and subtract array2.

I think the problem occurs due to the successive indexing, which might work on a copy of the tensor.
Could you somehow combine the indexing into just one operation for array1?
If you need some help, could you post the shapes of array1 and array2, so that we can have a look?
Currently the second index won’t work, as np.arange(10) and np.arange(5, 10, 1) have different shapes and cannot be added together.
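To illustrate the suggestion, here is one way combining the two index steps into a single advanced-indexing operation could look. The shapes and index values are made up, since the original code wasn’t posted:

```python
import torch

# Hypothetical stand-ins for the thread's tensors and indices:
array1 = torch.zeros(3, 10)
array2 = torch.ones(3, 5)
index1 = torch.tensor([0, 1, 2])
cols = torch.arange(5, 10)

# One combined advanced-indexing operation instead of chained indexing;
# index1[:, None] has shape [3, 1] and broadcasts against cols (shape [5]),
# so each selected row picks columns 5..9 and is updated in place.
array1[index1[:, None], cols] -= array2
```

Because the row and column indices appear in a single indexing block, the in-place subtraction writes straight through to array1.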

I should also mention that I tried .sub (and .sub_) and .add (and .add_) without success. And if I convert all these matrices to numpy arrays, this broadcast subtraction then seems to work in place as I originally intended.

EDIT: I collapsed index3 to a single value, but the in-place subtraction still doesn’t seem to work.

Thanks for the information.
So basically you would like to index twice in dim0?
The first time using index1 and based on this result you would index again in both dimensions using index2+index3?
Is that correct?

Yeah, I should probably have tested this more before I posted, but I’m wondering whether any type of in-place broadcast subtraction works, let alone my triple-index version.

But perhaps indexing twice, as you say, is the breaking point here. Perhaps you’re suggesting that I get rid of the double index!? Let me try that…
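For what it’s worth, plain in-place broadcast subtraction (without chained indexing) does work in PyTorch. A quick sanity check with made-up shapes:

```python
import torch

t = torch.ones(3, 10)
x = torch.full((10,), 2.0)  # broadcasts across dim 0

t.sub_(x)    # in-place broadcast subtraction via .sub_
t[0:2] -= x  # in-place subtraction through a basic slice also works
```

So the broadcasting itself isn’t the problem; it’s the chained advanced indexing in front of it.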

I tried to use your shape information to come up with an example and then realized how your indexing is probably supposed to work. Does this version work correctly in numpy?

I think you may be right, this double-indexing version may not work even in numpy.
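It indeed doesn’t: in numpy, the first fancy index already returns a copy, so the subtraction updates the copy and is then thrown away. A minimal sketch with made-up shapes:

```python
import numpy as np

array1 = np.zeros((3, 10))
array2 = np.ones((3, 5))
index1 = np.array([0, 1, 2])

# array1[index1] uses advanced indexing, which returns a copy;
# the in-place subtraction then modifies that copy, not array1.
array1[index1][:, np.arange(5, 10)] -= array2
assert not array1.any()  # array1 is untouched: the write was lost
```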

So this funky way of indexing was done specifically to deal with the tensor format, but the original numpy code has a single continuous indexing block, i.e. [index1, index2+index3]…

I think I need to figure out how to do that in PyTorch, if that’s possible at all!?
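It is: the same single-block advanced indexing carries over to PyTorch, and in-place subtraction through one [rows, cols] index works there as well. A sketch with made-up shapes and indices (index3 collapsed to a scalar, as in the edit above):

```python
import torch

# Hypothetical stand-ins for the thread's indices:
array1 = torch.zeros(4, 10)
array2 = torch.ones(2, 10)
index1 = torch.tensor([[1], [3]])  # row indices, shape [2, 1]
index2 = torch.arange(10)          # column offsets, shape [10]
index3 = 0                         # collapsed to a single value

# The single "continuous" indexing block from numpy works unchanged:
# index1 broadcasts against (index2 + index3) to select a [2, 10] block,
# and the subtraction is applied in place to array1.
array1[index1, index2 + index3] -= array2
```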

Maybe we could look at the problem from the other direction: given a small sample tensor array1 of shape [3, 10], what should the result be? Alternatively, how did you calculate these indices? Maybe we can simplify the problem a bit.

I think you’ll find the best description in the second example of the numpy advanced indexing docs. np.newaxis broadcasts the array, so that you don’t have to repeat it manually.
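A tiny illustration of that np.newaxis trick, with made-up data:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
rows = np.array([0, 2])

# rows[:, np.newaxis] has shape (2, 1) and broadcasts against the
# column index (shape (4,)), selecting a (2, 4) block without
# manually repeating the row indices.
block = a[rows[:, np.newaxis], np.arange(4)]
# block is rows 0 and 2 of a: [[0, 1, 2, 3], [8, 9, 10, 11]]
```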