Hello.
I would like to perform the operation below without the Python for loop, in plain PyTorch. How do I go about it?
torch.stack([v[f] for v,f in zip(verts_padded, faces_padded)])
where verts_padded has shape (B, N, 3) and faces_padded has shape (B, M, 3).
I see a related question from 3 years ago, but it is unanswered: Indexing 3D Tensor using 3D Index - #2 by HuynhLam
Thank you
Given your shapes and the posted for loop, this would work:
import torch

B, N, M = 4, 5, 6
verts_padded = torch.randn(B, N, 3)
# face indices must be valid row indices into verts_padded, i.e. in [0, N)
faces_padded = torch.randint(0, N, (B, M, 3))

out = torch.stack([v[f] for v, f in zip(verts_padded, faces_padded)])
res = verts_padded[torch.arange(B).unsqueeze(1).unsqueeze(2), faces_padded]
print((out == res).all())
> tensor(True)
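To make the broadcasting explicit, here is a small sketch (using the same shapes as above) that prints the shapes involved: the batch index tensor of shape (B, 1, 1) broadcasts against faces_padded's shape (B, M, 3), so each output element res[b, m, k] picks verts_padded[b, faces_padded[b, m, k]].

```python
import torch

B, N, M = 4, 5, 6
verts_padded = torch.randn(B, N, 3)
faces_padded = torch.randint(0, N, (B, M, 3))

# batch index of shape (B, 1, 1); broadcasts with faces_padded's (B, M, 3)
batch_idx = torch.arange(B).unsqueeze(1).unsqueeze(2)
print(batch_idx.shape)  # torch.Size([4, 1, 1])

# res[b, m, k] == verts_padded[b, faces_padded[b, m, k]]
res = verts_padded[batch_idx, faces_padded]
print(res.shape)        # torch.Size([4, 6, 3, 3])
```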
Hello @ptrblck, sorry for my late reply; I did not receive a notification about your response.
Your solution works, but the unsqueeze calls are still not intuitive to me. Can you please elaborate on them? Thanks
The NumPy Advanced Indexing docs explain it better than I could, so take a look at the linked section, in particular the usage of np.newaxis:
import numpy as np

# example array from the NumPy docs: values 0 through 11 in shape (4, 3)
x = np.arange(12).reshape(4, 3)

rows = np.array([0, 3], dtype=np.intp)
columns = np.array([0, 2], dtype=np.intp)
rows[:, np.newaxis]
array([[0],
       [3]])
x[rows[:, np.newaxis], columns]
array([[ 0,  2],
       [ 9, 11]])
and compare it to the indexing approach without broadcasting (the previous snippet in that section of the docs).