Indices for different columns of a tensor

Hi,

I am seeing unexpected behavior in the following scenario. tne is the batch size, and N is the total number of possible locations for each example. Each example has exactly 24 scalar outputs, whose locations are stored in the dof tensor (dof has size tne x 24). Fint_e has size tne x 24 (i.e., the 24 outputs for each example). I am trying to construct a large tensor of size tne x N. When I do the following, it fills the wrong entries. Any advice?

Fint_MAT = torch.zeros((tne, N))
Fint_MAT[:, dof[:, :24]] = Fint_e[:, :24]

The dof tensor, which has size batch size x 24, holds different indices for each example, but every example has exactly 24 indices.

For instance,
dof[0,:] = 0, 1, 6, 9, … (24 in total)
dof[1,:] = 1, 100, 151, 300, … (24 in total)

Any hint would be appreciated.


Array-based indexing beyond the first dimension does not work in PyTorch the way it does in NumPy. Use torch.gather. Example here
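For reference, a minimal sketch of what torch.gather does along dim 1: it reads a different set of columns for each row, which plain slicing cannot express. The tensor values below are made up for illustration.

```python
import torch

x = torch.tensor([[10., 20., 30., 40.],
                  [50., 60., 70., 80.]])
idx = torch.tensor([[3, 0],
                    [1, 2]])

# out[i, j] = x[i, idx[i, j]] -- per-row column selection
out = torch.gather(x, 1, idx)
print(out)  # tensor([[40., 10.], [60., 70.]])
```

Note that gather reads values out of a tensor; writing values into per-row locations is the reverse operation.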


Thanks! It seems to be along the lines of what I am trying to do, but I am still not able to make it work for my case.

It seems that gather cannot do what I need: the dof tensor, which has size batch size x 24, holds different indices for each example, but every example has exactly 24 indices.

For instance,
dof[0,:] = 0, 1, 6, 9, … (24 in total)
dof[1,:] = 1, 100, 151, 300, … (24 in total)

Any hint would be appreciated.
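Since the goal here is to write values into different columns per row (rather than read them, which is what gather does), one way to sketch this is with scatter_, the write-side counterpart of gather. The sizes below are made up for illustration (3 outputs per example instead of 24); this is a sketch, not the original poster's code.

```python
import torch

tne, N = 2, 8   # batch size, total number of locations (illustrative values)
k = 3           # stand-in for the 24 outputs per example

# Per-example target locations and values
dof = torch.tensor([[0, 1, 6],
                    [1, 5, 7]])                        # shape (tne, k)
Fint_e = torch.arange(1., tne * k + 1).reshape(tne, k) # shape (tne, k)

Fint_MAT = torch.zeros(tne, N)

# scatter_ writes Fint_e[i, j] into Fint_MAT[i, dof[i, j]],
# i.e., a different set of columns for each row.
Fint_MAT.scatter_(1, dof, Fint_e)
print(Fint_MAT)
```

Equivalently, advanced indexing with an explicit row index also works in PyTorch: Fint_MAT[torch.arange(tne)[:, None], dof] = Fint_e. The original code failed because Fint_MAT[:, dof] broadcasts the full dof tensor against every row instead of pairing row i with dof[i].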