Build your own loss function in PyTorch

  1. Yes, you don’t have to write any Lua code when you’re using PyTorch.
  2. Yes, the gradients will be computed automatically, as long as you use Variables all the time (without any .data unpacking or NumPy conversions). It won't work in your example, because you're doing the computation on NumPy arrays.
  3. Optimizers don't need to know anything about your loss - you just call .backward() on the loss Variable so that the gradients get computed and stored on the parameters. All an optimizer needs is the list of Variables you want it to optimize (see the sketch after this list).
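
A minimal sketch of points 2 and 3 (the model, shapes, and loss below are made-up placeholders, not the code from your question): everything stays a Variable from input to loss, so autograd records the graph, and the optimizer is handed nothing but the parameter list.

```python
import torch
from torch.autograd import Variable  # in PyTorch >= 0.4, plain Tensors work too

# Hypothetical toy setup - substitute your own model and shapes.
W = Variable(torch.randn(4, 3), requires_grad=True)   # parameter to optimize
x = Variable(torch.randn(8, 4))                       # inputs (no grad needed)
target = Variable(torch.randn(8, 3))

def my_loss(pred, target):
    # Built entirely from Variable ops - no .data, no numpy - so the
    # graph is recorded and gradients flow automatically.
    return ((pred - target) ** 2).mean()

# The optimizer knows nothing about the loss, only which Variables to update.
optimizer = torch.optim.SGD([W], lr=0.1)

optimizer.zero_grad()
loss = my_loss(x.mm(W), target)
loss.backward()     # populates W.grad
optimizer.step()    # reads W.grad and updates W
```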

Since the code performs a lot of small operations, the graph recorded just for the loss function would likely be much larger than that of your model. Because of this, I'd recommend writing your own autograd Function, or thinking a bit more about how you can compute your similarity matrix. If you're operating in Euclidean space and you rewrite the formulas, it should be possible to batch some of the computation. As far as I can see, it can be decomposed into a Gramian matrix plus some norms added to the rows and columns (see the sketch below).
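
A sketch of that decomposition, assuming the similarity matrix is pairwise squared Euclidean distances (an assumption on my part): since ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 x_i . x_j, the whole matrix falls out of one Gramian X Xᵀ plus its diagonal broadcast over the rows and columns, instead of O(n²) per-pair operations.

```python
import torch
from torch.autograd import Variable

def pairwise_sq_dists(X):
    # X: (n, d). The Gramian X X^T holds every dot product x_i . x_j.
    gram = X.mm(X.t())                 # (n, n)
    sq_norms = gram.diag()             # ||x_i||^2 sits on the diagonal
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 x_i . x_j,
    # built by broadcasting the norms over rows and columns.
    d2 = sq_norms.unsqueeze(1) + sq_norms.unsqueeze(0) - 2 * gram
    return d2.clamp(min=0)             # clip tiny negatives from rounding

X = Variable(torch.randn(5, 3), requires_grad=True)
D = pairwise_sq_dists(X)               # one batched graph, not n^2 small ops
D.sum().backward()                     # gradients still computed automatically
```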
