Loss function without directly connecting to network output tensor

Hi, I am facing a similar problem. The output of my network cannot be used to compute the loss directly; it first has to be transformed into an observable sensor signal, which can then be compared with the desired one.

In my real application, I need an RNN to generate driving signals for my robot's motors; however, I do not know what the optimal driving signals are. The only criterion is that the robot's behavior under those driving signals matches what we want.

The relationship between the driving signal and the response is about 90% known, with some uncertainty, so I would prefer not to train another NN to model that relationship, which would add complexity.
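If the driving-signal-to-response relationship is mostly known, one option is to write that model directly with differentiable torch ops instead of a second NN, so gradients flow from the sensor-space loss back to the network. Here is a minimal sketch; the `dynamics` function (a linear map `A` plus a small `tanh` term) and the toy `Linear` network are purely hypothetical stand-ins for the real robot model and RNN:

```python
import torch

torch.manual_seed(0)

A = torch.randn(4, 4)                      # assumed known linear part of the dynamics

def dynamics(drive):
    """Hypothetical differentiable model of the robot response (a sketch)."""
    return drive @ A.T + 0.1 * torch.tanh(drive)

net = torch.nn.Linear(8, 4)                # toy stand-in for the RNN
x = torch.randn(16, 8)                     # batch of inputs
target = torch.randn(16, 4)                # desired sensor readings

drive = net(x)                             # driving signal from the network
response = dynamics(drive)                 # simulated sensor signal
loss = torch.nn.functional.mse_loss(response, target)
loss.backward()                            # gradients flow through dynamics()

print(net.weight.grad is not None)         # the network receives gradients
```

Because `dynamics` is built from torch ops, no second NN is needed; the known 90% of the physics acts as the differentiable bridge between the driving signal and the loss.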

Might I call this indirect loss training? People have suggested using a Straight-Through Estimator to connect the driving signal to the observable sensor signal. I have also been reading about directed acyclic graphs, wondering how to embed the "black-box" operation in the graph so that the loss can be backpropagated.
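For reference, a Straight-Through Estimator can be written as a custom `torch.autograd.Function`: the forward pass runs the non-differentiable black box, and the backward pass simply passes the gradient through unchanged. The black box below (a numpy `round`) is only a placeholder for the real robot/sensor pipeline:

```python
import torch

class BlackBoxSTE(torch.autograd.Function):
    """Straight-Through Estimator around a non-differentiable operation."""

    @staticmethod
    def forward(ctx, drive):
        # Placeholder black box: anything non-differentiable can go here,
        # including numpy code or hardware I/O.
        response = torch.from_numpy(drive.detach().numpy().round())
        return response

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: treat the black box as identity for the gradient.
        return grad_output

net = torch.nn.Linear(8, 4)
x = torch.randn(2, 8)
target = torch.zeros(2, 4)

drive = net(x)
response = BlackBoxSTE.apply(drive)        # non-differentiable step, STE-wrapped
loss = torch.nn.functional.mse_loss(response, target)
loss.backward()

print(net.weight.grad is not None)         # gradients reach the network
```

The identity gradient is a crude approximation; if a better local Jacobian of the black box is known (even the 90%-known model), `backward` can multiply `grad_output` by that instead.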

Setting the intermediate variables to requires_grad=True does not seem to help: the loss stays exactly the same during training.
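That symptom is consistent with the autograd graph being severed at the black-box step: a new tensor created with `requires_grad=True` becomes a fresh leaf, so it receives a gradient itself, but nothing propagates back to the network parameters, and the optimizer never changes them. A small sketch of the failure mode (the shapes and the numpy round trip are illustrative assumptions):

```python
import torch

net = torch.nn.Linear(8, 4)
x = torch.randn(2, 8)
target = torch.zeros(2, 4)

drive = net(x)

# Black-box step: the round trip through numpy severs the autograd graph,
# and requires_grad=True only makes `mid` a new, disconnected leaf.
mid = torch.tensor(drive.detach().numpy(), requires_grad=True)

loss = torch.nn.functional.mse_loss(mid, target)
loss.backward()

print(mid.grad is not None)    # the new leaf gets a gradient...
print(net.weight.grad)         # ...but the network gets none (None)
```

This is why the loss never moves: the parameters upstream of the break receive no gradient at all.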

Please refer to the following thread:

How to backpropagate a black box generated cost value - autograd - PyTorch Forums

I am looking forward to any suggestions or comments on this subject.

Best regards