I am trying to train a tiny network `my_layer` that generates an output tensor `g` (with values ranging from 0 to 1) which controls part of a NumPy-based error calculation:

```python
g = my_layer(input_tensor)  # standard PyTorch setup I am familiar with
numpy_error = numpy_evaluation_module(g)
```

I need to treat `numpy_error` as my loss and backpropagate gradients all the way back to `input_tensor`.

I guess I need to modify `numpy_evaluation_module` so that it returns a PyTorch tensor. But what I observe is that the loss value does not change at all during training, so I suspect the backpropagation chain is broken somewhere and some operation was not differentiable.
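To illustrate what I mean by modifying `numpy_evaluation_module`: my understanding is that a NumPy step has to be wrapped in a custom `torch.autograd.Function` with a hand-written backward, since autograd cannot see through `.numpy()`. Here is a minimal sketch of that idea, where the mean-squared computation is just a toy stand-in for my actual NumPy code:

```python
import numpy as np
import torch

class NumpyEval(torch.autograd.Function):
    """Toy stand-in for numpy_evaluation_module: mean of g squared."""

    @staticmethod
    def forward(ctx, g):
        g_np = g.detach().cpu().numpy()   # this is where the graph is left
        err = np.mean(g_np ** 2)          # arbitrary NumPy-side error
        ctx.save_for_backward(g)
        return g.new_tensor(err)

    @staticmethod
    def backward(ctx, grad_output):
        (g,) = ctx.saved_tensors
        # hand-derived gradient of mean(g**2) with respect to g
        return grad_output * 2.0 * g / g.numel()

g = torch.rand(4, requires_grad=True)
loss = NumpyEval.apply(g)
loss.backward()
print(g.grad)  # non-None: gradients flow back through the NumPy step
```

In my real code the error calculation is much more complex, so I am not sure a closed-form backward like this is even feasible.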

Could someone suggest some potential places for me to check? Is there a recommended method to trace the backpropagation flow and quickly identify where the broken variable/tensor is?
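For context, so far I have only been checking tensors manually along these lines (the `.detach()` here just simulates the kind of accidental break I am worried about):

```python
import torch

def check_graph(**tensors):
    # a tensor with requires_grad=False and grad_fn=None is
    # disconnected from the autograd graph
    for name, t in tensors.items():
        print(f"{name}: requires_grad={t.requires_grad}, grad_fn={t.grad_fn}")

x = torch.rand(3, requires_grad=True)
y = (x * 2).detach()   # simulated break, e.g. via .numpy() or .detach()
z = y + 1              # everything downstream of y is also disconnected
check_graph(x=x, y=y, z=z)
```

This works for a handful of tensors but does not scale to my full pipeline, which is why I am asking for a more systematic approach.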