I am seeing unexpected behavior in the following scenario. tne is the batch size and N is the total number of possible locations for each example. Each example has exactly 24 scalar outputs, whose target locations are stored in the dof tensor (of size tne x 24). Fint_e also has size tne x 24 (i.e., the 24 outputs for each example). I am trying to construct a large tensor of size tne x N, but the following assignment fills it incorrectly. Any advice?

Fint_MAT = torch.zeros((tne, N))
Fint_MAT[:, dof[:, :24]] = Fint_e[:, :24]
The dof tensor (size tne x 24) holds different indices for each example, but every example has 24 indices in total. For instance:
dof[0, :] = 0, 1, 6, 9, … (24 in total)
dof[1, :] = 1, 100, 151, 300, … (24 in total)
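To make the intended behavior concrete, here is a minimal sketch with made-up small sizes (tne = 2, N = 8, and 3 outputs per example instead of 24). What I want is a per-example scatter: row i of Fint_MAT should receive Fint_e[i] at the columns listed in dof[i]. My understanding is that this requires pairing each row index with its own row of dof, e.g. via a broadcasted row-index tensor:

```python
import torch

# Small stand-ins for the real sizes: tne examples, N locations,
# k outputs per example (24 in the actual problem).
tne, N, k = 2, 8, 3

dof = torch.tensor([[0, 1, 6],
                    [1, 3, 7]])               # per-example target columns
Fint_e = torch.arange(1., tne * k + 1).reshape(tne, k)  # [[1,2,3],[4,5,6]]

# Pair row i with its own row of dof via broadcasting:
rows = torch.arange(tne).unsqueeze(1)         # shape (tne, 1)
Fint_MAT = torch.zeros((tne, N))
Fint_MAT[rows, dof] = Fint_e                  # row i written at columns dof[i]

print(Fint_MAT)
```

With plain `Fint_MAT[:, dof]`, by contrast, every row index is combined with the full (tne x 24) dof tensor, so the result is not the per-example scatter shown above.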
Any hint would be appreciated.