What's the equivalent of Theano's inc_subtensor()?


(David Leon) #1

To change the values of a subset of elements of a tensor, Theano provides inc_subtensor(). What is the equivalent in PyTorch?
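For reference, the Theano pattern I mean (a minimal sketch):

import theano.tensor as T

x = T.matrix('x')
# build a new symbolic expression equal to x with 1.0 added to column 3
y = T.inc_subtensor(x[:, 3], 1.0)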


(Alban D) #2

Hi,

You should take a look at the set of functions called index_*; they allow you to work with sub-tensors.
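For example, a minimal sketch of the index_* family (index_add_ turns out to be the closest analogue of inc_subtensor, as discussed below):

import torch

t = torch.zeros(5, 3)
idx = torch.LongTensor([0, 2, 4])

sub = t.index_select(0, idx)              # read rows 0, 2, 4 (returns a copy)
t.index_add_(0, idx, torch.ones(3, 3))    # add into those rows in place
t.index_copy_(0, idx, torch.randn(3, 3))  # overwrite those rows in place
t.index_fill_(0, idx, -1)                 # fill those rows with a scalar in place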


(David Leon) #3

Yes, I noticed the torch.index_select() function. However, it returns a new tensor, not a view, so if I do

t2 = torch.index_select(t1, dim, index)
t2 += 1.0

tensor t1 stays unchanged. I eventually need t1 itself to be changed.
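A quick illustration of the copy behaviour (a sketch with made-up shapes):

import torch

t1 = torch.zeros(4, 3)
t2 = torch.index_select(t1, 0, torch.LongTensor([0, 2]))
t2 += 1.0        # modifies the copy only
print(t1.sum())  # still 0: t1 is untouched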


(smth) #4

You can do standard NumPy-like indexing. Try this:

t1 = torch.randn(10, 5)
t2 = t1[:, 3]  # basic slicing returns a view sharing t1's storage
t2.fill_(0)    # filling the view in place also zeroes column 3 of t1
print(t1)

(David Leon) #5

@smth What if I need not just an integer index but a list of integers? E.g., I want indexing like this:
t1[:, [1, 3, 4]] += 1.0
However, this is not supported by PyTorch now. Is there another way, or do I have to use a for-loop?


(Y) #6

I think index_add_ is what you are looking for.
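For the example above, a sketch of the index_add_ form of t1[:, [1, 3, 4]] += 1.0:

import torch

t1 = torch.randn(10, 5)
idx = torch.LongTensor([1, 3, 4])

# add 1.0 to columns 1, 3 and 4 in place; the source tensor must match
# t1's shape everywhere except along dim 1, where it matches len(idx)
t1.index_add_(1, idx, torch.ones(10, 3))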


(David Leon) #7

Thanks, that’s exactly what I’m looking for.


(Adam Paszke) #8

And t1[:, [1, 3, 4]] += 1.0 is implemented, but instead of giving it [1, 3, 4] you need to wrap that in a LongTensor.


(Alban D) #9

@david-leon index_add_ is documented here with all the other index_* functions.


(David Leon) #10

@albanD Well, this is awkward :sweat: how did I miss it … and thanks a lot!

@apaszke I tried t1[:, torch.LongTensor([1,3,4])] but no luck; this error is raised:
TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.
My PyTorch version is 0.1.10_2.


(Adam Paszke) #11

@david-leon Yeah, sorry for that. It'd need to be indexing along the first dim.


(David Leon) #12

Thanks. To be clear for future readers:
t1[torch.LongTensor([1,3,4])] works, but t1[torch.LongTensor([1,3,4]), :] does not.
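That is, a minimal sketch of the working first-dim form of the in-place increment:

idx = torch.LongTensor([1, 3, 4])
t1[idx] += 1.0  # increments rows 1, 3 and 4 of t1 in place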

And one more related question: if I want to do indexing like
t1[[1,3,0], [1,3,4]]
what is the most efficient way to do this in PyTorch? In Theano we can do it the same way as in NumPy; however, PyTorch does not support this yet.


(Adam Paszke) #13

I think gather should do it.


(David Leon) #14

I can't figure it out with gather. According to its syntax,
torch.gather(input, dim, index, out=None)
gather can only index along a single dimension.

For the time being, I do the indexing via a Python loop:
a1 = torch.stack([t1[Idx1[i], Idx2[i]] for i in range(3)])
in which
Idx1 = torch.LongTensor([1,3,0]) and Idx2 = torch.LongTensor([1,3,4])
Apparently this is neither an efficient nor an elegant way.
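For future readers, one loop-free alternative is to combine index_select with gather; a sketch, using the same Idx1 and Idx2 as above:

import torch

t1 = torch.randn(5, 5)
Idx1 = torch.LongTensor([1, 3, 0])
Idx2 = torch.LongTensor([1, 3, 4])

# pick the rows, then gather one element per row along dim 1
rows = t1.index_select(0, Idx1)                   # shape (3, 5)
a1 = rows.gather(1, Idx2.view(-1, 1)).squeeze(1)  # t1[1,1], t1[3,3], t1[0,4]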