How can I calculate gradients of a quantity that is itself built from gradients?

In TensorFlow, for example, I could call tf.gradients() twice, passing the result of the first call into the second.

My specific problem is this: I am implementing TRPO, and I have:

flat_grad <- the flattened gradients of a loss w.r.t. the network parameters

x <- a tensor with the same shape as flat_grad

and I need the gradients of (flat_grad * x) w.r.t. the network parameters, i.e. a Hessian-vector product.

In the process of flattening the gradients, I had to convert everything into a numpy array, which broke the backprop chain. How can I solve this problem?
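Assuming the framework is PyTorch (the question contrasts with TensorFlow but never names its own framework), one possible sketch is below: the key points are to pass create_graph=True to the first torch.autograd.grad call so the gradient itself stays differentiable, and to flatten with torch.cat instead of numpy so the autograd chain is never broken. The function name flat_hessian_vector_product is made up for illustration.

```python
import torch

def flat_hessian_vector_product(loss, params, x):
    # First differentiation: create_graph=True keeps the graph alive
    # so the resulting gradients can themselves be differentiated.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Flatten with torch.cat, NOT numpy, so backprop still works.
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    # Scalar surrogate: dot product of the flattened gradient with x.
    grad_dot_x = torch.dot(flat_grad, x)
    # Second differentiation: d(flat_grad . x)/d(params) = H @ x.
    hvp = torch.autograd.grad(grad_dot_x, params)
    return torch.cat([h.reshape(-1) for h in hvp])

# Tiny check: for loss = 0.5 * ||w||^2 the Hessian is the identity,
# so the Hessian-vector product should equal x itself.
w = torch.tensor([1.0, 2.0], requires_grad=True)
loss = 0.5 * (w ** 2).sum()
x = torch.tensor([3.0, 4.0])
hvp = flat_hessian_vector_product(loss, [w], x)
```

The same pattern (grad with create_graph=True, flatten in-framework, differentiate the dot product) is the standard way conjugate-gradient TRPO implementations compute Hessian-vector products without ever materializing the Hessian.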