If this is the wrong place to post about this, please let me know. I wasn’t sure which category would fit best.

Hey everyone,

(Disclaimer: I don’t really have any background in machine learning and am approaching this from a visual effects TD perspective… so bear with me ^^)

I’m currently trying to develop a mesh fitting algorithm to morph between two 3D meshes with different topologies, using the Chamfer distance function in PyTorch3D.

I followed the tutorials provided and managed to set everything up in my 3D software (Houdini). I was able to successfully morph between different meshes.

I now want to define certain key points / features which appear in both the source and the target mesh, to be able to create a cleaner transition (e.g. match eyes to eyes and nose to nose when fitting two different heads).
I found that the “loss.chamfer_distance” function takes weights as an argument, but I don’t understand how to use them or whether they are even useful for my problem. Can someone explain what exactly this is supposed to do?

Explanation from pytorch3d docs:

weights – Optional FloatTensor of shape (N,) giving weights for batch elements for reduction operation.

I also wondered whether it is possible to fit the mesh while maintaining the topological symmetry of the source mesh, and how I would go about implementing this.

I’m very thankful for any pointers on how to approach either of my two problems!

Take this explanation with a grain of salt, as I’m neither deeply familiar with your use case nor with PyTorch3D.

The description:

weights – Optional FloatTensor of shape (N,) giving weights for batch elements for reduction operation.

sounds ambiguous, as apparently two reductions are used:

batch_reduction – Reduction operation to apply for the loss across the batch, can be one of [“mean”, “sum”] or None.

point_reduction – Reduction operation to apply for the loss across the points, can be one of [“mean”, “sum”].

Based on the shape of weights, I would assume it’s used in the batch_reduction, since each sample is supposed to get one weight value.
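To illustrate what a per-batch-element weighting of shape (N,) would mean, here is a minimal sketch in plain PyTorch. The tensors are made up for illustration, and this is only one plausible way the weights could enter a “mean” batch reduction — check the PyTorch3D source for the exact behavior:

```python
import torch

# Hypothetical per-sample losses for a batch of N = 3 meshes,
# i.e. the per-mesh Chamfer distances after point_reduction.
per_sample_loss = torch.tensor([0.2, 0.5, 0.1])

# weights of shape (N,): one value per batch element
weights = torch.tensor([1.0, 0.0, 2.0])

# A weighted "mean" batch reduction: scale each sample's loss by its
# weight, then normalize by the sum of the weights.
weighted_mean = (per_sample_loss * weights).sum() / weights.sum()
# = (0.2*1.0 + 0.5*0.0 + 0.1*2.0) / (1.0 + 0.0 + 2.0)
```

So the second mesh in the batch is ignored entirely (weight 0) and the third counts double — but every point within one mesh is still treated equally, which is why this doesn’t directly help with the eyes-to-eyes matching.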

Based on your description, I understand that you would rather want to add weights to specific points instead. If so, I would have tried to apply the weighting to the unreduced loss, but the reduction options don’t seem to provide a 'none' option.
In that case you could try to implement your own custom weighted chamfer_distance by copying the implementation from here and adding the point weighting manually.
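As a starting point, here is a simplified, unbatched sketch of what such a point-weighted Chamfer distance could look like. It is not the PyTorch3D implementation (which uses an efficient kNN instead of a full distance matrix), and the `x_weights` / `y_weights` parameters are hypothetical additions, not part of any existing API:

```python
import torch

def weighted_chamfer_distance(x, y, x_weights=None, y_weights=None):
    """Simplified, unbatched Chamfer distance with optional per-point weights.

    x: (P1, 3) source points; y: (P2, 3) target points.
    x_weights: (P1,) weight per source point (e.g. higher around the eyes);
    y_weights: (P2,) weight per target point.
    """
    d = torch.cdist(x, y)                # (P1, P2) pairwise distances
    x_to_y = d.min(dim=1).values ** 2    # squared dist to nearest target point
    y_to_x = d.min(dim=0).values ** 2    # squared dist to nearest source point
    if x_weights is not None:
        x_to_y = x_to_y * x_weights      # emphasize/ignore individual points
    if y_weights is not None:
        y_to_x = y_to_x * y_weights
    return x_to_y.mean() + y_to_x.mean()

x = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
y = torch.tensor([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
loss = weighted_chamfer_distance(x, y)
# Down-weighting the second source point halves its contribution:
loss_w = weighted_chamfer_distance(x, y, x_weights=torch.tensor([1.0, 0.0]))
```

For landmark matching (eyes to eyes etc.) you might additionally want an explicit correspondence term between the marked points rather than only a nearest-neighbor weighting, since nothing here forces the eye points to snap to the eye points specifically.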

Thanks a lot for the explanation!
I believe you are right regarding the use of weights for batching.
While this isn’t the best-case outcome, I’m happy to have understood it a little better and will give implementing it a shot in the next few days.

One more thing:
could you elaborate on what you mean by “unreduced loss”?

The native PyTorch loss functions, such as nn.CrossEntropyLoss, allow you to use reduction='none', which will calculate the loss for each sample, pixel, etc. without reducing it to a sum or (by default) a mean. The result is thus not a scalar value but a tensor, which you can reduce afterwards (e.g. by calling mean on it). This allows for more flexibility, as you have the chance to apply an elementwise weighting etc. before the final reduction.
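A short example of what that looks like in practice — the logits, targets, and weights below are made up purely for illustration:

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5],
                       [0.1, 1.5],
                       [1.0, 1.0]])      # (N=3 samples, C=2 classes)
targets = torch.tensor([0, 1, 0])

# reduction='none' keeps one loss value per sample instead of a scalar
criterion = nn.CrossEntropyLoss(reduction="none")
per_sample = criterion(logits, targets)  # shape (3,)

# Now an elementwise weighting can be applied before the final reduction,
# e.g. emphasizing the second sample:
weights = torch.tensor([1.0, 2.0, 0.5])
loss = (per_sample * weights).mean()     # scalar, ready for loss.backward()
```

This is exactly the pattern you would want for per-point weighting in a custom Chamfer loss: keep the loss unreduced, multiply by your weights, then reduce.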