Hi PyTorch community,
I am seeing unexpected behavior in the following scenario. tne is the batch size and N is the total number of possible locations per example. Each example has exactly 24 scalar outputs, whose locations are stored in the tensor dof (of size tne x 24). Fint_e also has size tne x 24 (i.e., the 24 outputs for each example). I am trying to build a larger tensor of size tne x N, but the following fills it in the wrong way:
Fint_MAT = torch.zeros((tne,N))
Fint_MAT[:,dof] = Fint_e
Here is a reproducible example to give a better illustration of the issue:
import torch

tne = 3
N = 48
Fint_MAT = torch.zeros((tne, N))
Fint_e = torch.randn((tne, 24))
v1 = torch.arange(24).unsqueeze(0)
v2 = torch.arange(12, 36).unsqueeze(0)
v3 = torch.arange(24, 48).unsqueeze(0)
dof = torch.cat((v1, v2, v3), dim=0)  # (tne, 24), torch.arange already yields int64
Fint_MAT[:, dof] = Fint_e
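To show where (I think) it goes wrong: reading with the same index suggests the 2-D dof adds a dimension, so the tne x 24 right-hand side gets broadcast across all rows rather than matched row by row. A minimal sketch of that shape check:

```python
import torch

tne, N = 3, 48
Fint_MAT = torch.zeros(tne, N)
dof = torch.stack([torch.arange(24),
                   torch.arange(12, 36),
                   torch.arange(24, 48)])  # (tne, 24)

# The leading slice keeps dim 0, and the (tne, 24) advanced index
# replaces dim 1, so the indexed view is 3-D, not (tne, 24):
print(Fint_MAT[:, dof].shape)  # torch.Size([3, 3, 24])
```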
Each row should have 24 nonzero entries and 24 zeros, with the nonzero columns differing from row to row as given by the corresponding row of dof. However, what I actually get is that all 48 entries of every row are nonzero.
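For reference, here is a sketch of the row-wise assignment I am after. Two approaches that I believe should produce the intended result: pairing an explicit row index with dof so each row only writes its own columns, or using scatter_ along the column dimension.

```python
import torch

tne, N = 3, 48
Fint_e = torch.randn(tne, 24)
dof = torch.stack([torch.arange(24),
                   torch.arange(12, 36),
                   torch.arange(24, 48)])  # (tne, 24) column indices per row

# Option 1: broadcast a (tne, 1) row index against the (tne, 24) dof,
# so row i writes only to the columns listed in dof[i].
Fint_MAT = torch.zeros(tne, N)
rows = torch.arange(tne).unsqueeze(1)
Fint_MAT[rows, dof] = Fint_e

# Option 2: scatter_ along dim 1 does the same per-row placement.
Fint_MAT2 = torch.zeros(tne, N).scatter_(1, dof, Fint_e)

print(torch.equal(Fint_MAT, Fint_MAT2))  # True
```

With either version, row 0 is nonzero only in columns 0-23, row 1 in 12-35, and row 2 in 24-47, which is the fill pattern described above.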