Hello,
I’m not sure how best to formulate this issue - I have a model that does something like this:
def forward(self, xyz): # xyz is (B, 3, N)
# a bunch of stuff that extracts features
# ....
x = self.drop1(F.relu(self.bn1(self.conv1(l0_points)))) # x: (B, 128, N), N = number of xyz points
Usually this would end in semantic segmentation, but I’m not interested in that - I’m actually interested in picking a unique subset of xyz with exactly a predetermined num_points points, so the output would be shaped like (B, 3, num_points).
I’m interested in using the learned extracted features, but I’m not sure how I can pick the subset of the points without disconnecting the gradient. One example of something I tried was:
# self.weights = nn.Conv1d(128, 1, 1)
scores = self.weights(x) # shape: [B, 1, N]
scores = torch.softmax(scores, dim=-1)
_, top_indices = torch.topk(scores.squeeze(1), self.num_points, dim=-1) # num_points is the size of the subset
Using the indices I can then pick the subset, but this obviously disconnects the computational graph… Is there a way to sort of “force” the learned features to translate into a unique, fixed-size subset of the input points?
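For reference, here is a minimal, self-contained sketch of just the selection step, with random stand-ins for xyz and the extracted features x (shapes and the scoring conv are as above; the rest of the model is omitted). It shows exactly where the graph disconnects: the topk indices are integers, so the gathered subset carries no gradient back to the scoring conv.

```python
import torch
import torch.nn as nn

B, N, num_points = 2, 16, 4

# Hypothetical stand-ins for the real model's input and extracted features
xyz = torch.randn(B, 3, N)   # input points
x = torch.randn(B, 128, N)   # learned features from the backbone

weights = nn.Conv1d(128, 1, 1)  # the scoring head (self.weights above)

scores = torch.softmax(weights(x).squeeze(1), dim=-1)    # (B, N), requires grad
_, top_indices = torch.topk(scores, num_points, dim=-1)  # (B, num_points), integer indices

# Gather the chosen points: (B, 3, num_points)
subset = torch.gather(xyz, 2, top_indices.unsqueeze(1).expand(-1, 3, -1))

# The problem: subset depends on scores only through the integer indices,
# so subset.requires_grad is False and the scoring conv never gets a gradient.
print(scores.requires_grad, subset.requires_grad)
```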
I hope I make sense. In the end, I expect to pass model(xyz) and get a unique hard subset of xyz shaped like (B, 3, num_points).
Thank you for any tips!