How to backpropagate a black box generated cost value


I want to minimize the error generated by my black box, but there is no grad_fn attached to it. How could autograd know what to do with this loss value?


Autograd cannot create the backward pass without a grad_fn and you should get an error like:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Did you detach the tensor manually from the computation graph or what is your exact use case?
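For context, a minimal snippet (an illustration only, not your code) that raises exactly this error by detaching a loss from the graph:

```python
import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()

detached = loss.detach()  # no grad_fn anymore, the link back to x is cut
detached.backward()       # RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```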

The thing is, the value generated by the RNN is “interpreted”: no differentiable computations are performed on it inside the black box, so the error it returns is detached from the graph. Is there any way I could backpropagate to minimize that detached error (by somehow manually re-attaching it)?

The only valid way I can think of at the moment is to write the backward function manually for your black-box methods.
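For illustration only (the “black box” here is just NumPy’s sin, not your simulation), a minimal sketch of what a manual backward via torch.autograd.Function looks like:

```python
import numpy as np
import torch

class BlackBoxSine(torch.autograd.Function):
    # Toy black box: the forward pass runs outside autograd (NumPy),
    # so the backward pass has to be written by hand.
    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)
        out = np.sin(inp.detach().cpu().numpy())  # non-differentiable call
        return torch.as_tensor(out, dtype=inp.dtype, device=inp.device)

    @staticmethod
    def backward(ctx, grad_output):
        inp, = ctx.saved_tensors
        return grad_output * torch.cos(inp)  # manual derivative of sin

x = torch.randn(4, requires_grad=True)
BlackBoxSine.apply(x).sum().backward()
print(x.grad)  # equals cos(x)
```

This only works because the derivative of the wrapped operation is known; for a general simulation you would have to supply an analytic or approximate gradient yourself.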

The black box is a simulation, so I do not think it is possible to write a backward pass for it. I attempted to call backward() on the output of the RNN directly, passing the error as the gradient for every element, but it did not work.

The computation graph is built during the forward pass and is used in the backward pass to calculate the gradients.
If you are detaching some tensors from the graph, you would need to write the backward manually, as explained.

If that’s not possible, you won’t be able to compute the gradients using the standard backward call.
Maybe other training approaches that do not need gradients would work (e.g. evolutionary algorithms)?
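As a rough sketch of that idea (black_box_loss and model are placeholders here: a function that runs the simulation on a flat parameter vector and returns a scalar score, and the RNN being trained), an evolution-strategies-style update that needs no gradients from the black box:

```python
import torch

def es_step(model, black_box_loss, lr=0.01, sigma=0.1, n_samples=32):
    # Estimate the gradient of a non-differentiable loss by scoring
    # random perturbations of the parameters (antithetic sampling).
    params = list(model.parameters())
    flat = torch.nn.utils.parameters_to_vector(params).detach()
    grad_est = torch.zeros_like(flat)
    for _ in range(n_samples):
        eps = torch.randn_like(flat)
        grad_est += (black_box_loss(flat + sigma * eps)
                     - black_box_loss(flat - sigma * eps)) / (2 * sigma) * eps
    grad_est /= n_samples
    # Write the updated values back into the model's parameters.
    torch.nn.utils.vector_to_parameters(flat - lr * grad_est, params)
```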
