Assigning manually derived gradient to an optimizer

I’m looking for a PyTorch equivalent of this TensorFlow expression:

        opt = tf.train.AdamOptimizer(learning_rate)
        train_op = opt.apply_gradients([(gradients, variable)])

The reason is that this gradient is not derived directly from differentiating a single objective function, but is the result of a function F that combines the gradients of several different objective functions, and I want to apply this combined gradient to update the variable using Adam.
Here there is no need to call backward(); I simply want to update the variable in the Adam way with that gradient.
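Something along these lines is what I am after, as a minimal sketch with placeholder names (variable, combined_grad stand in for my actual parameter and the output of F):

        import torch

        # Parameter to update and its Adam optimizer (placeholder shapes/lr)
        variable = torch.zeros(10, requires_grad=True)
        optimizer = torch.optim.Adam([variable], lr=1e-3)

        # Manually derived gradient, same shape as the parameter
        combined_grad = torch.randn(10)  # stand-in for the result of F

        optimizer.zero_grad()
        variable.grad = combined_grad    # assign the gradient directly, no backward()
        optimizer.step()                 # Adam update using the assigned gradient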
