I want to create a custom loss function to train my model. What the model learns is a 2D transformation matrix.
To compute the loss, feature points of the two images are detected and matched (using OpenCV + NumPy). The loss is then:
loss = (distances between matched feature points).sum() / num_matches
I want to train the model to drive this loss to zero.
Is it possible to build a custom loss function using OpenCV + NumPy?
If so, is there an example that implements this?