How to override gradients during backpropagation in PyTorch?

Recently, I found a piece of code written in TensorFlow. It overrides tf.round()'s gradient with tf.identity()'s using the gradient_override_map function:

import tensorflow as tf

G = tf.get_default_graph()

def quantize(x, k):
    # Quantize x in [0, 1] onto 2**k - 1 discrete levels
    n = float(2**k - 1)
    # During backprop, treat Round as Identity so gradients pass through
    with G.gradient_override_map({"Round": "Identity"}):
        return tf.round(x * n) / n

I want to implement this in PyTorch but have no idea how. Should I define a new operation? Could anyone give some advice on that?

Maybe you can use register_backward_hook?
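For reference, register_backward_hook attaches to an nn.Module and lets the hook inspect or replace the gradients flowing through that module. A minimal sketch of the mechanism (the halving factor is arbitrary, purely for illustration; newer PyTorch versions deprecate this hook in favor of register_full_backward_hook):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)

def halve_grad_input(module, grad_input, grad_output):
    # Returning a tuple of the same length replaces grad_input for this module
    return tuple(g * 0.5 if g is not None else None for g in grad_input)

model.register_backward_hook(halve_grad_input)

out = model(torch.randn(1, 4)).sum()
out.backward()  # the hook fires here and scales the incoming gradients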

I am also interested in this.
Same context: quantization.
The TF code can be seen here: https://github.com/ppwwyyxx/tensorpack/blob/master/examples/DoReFa-Net/dorefa.py#L20

What is the equivalent approach in PyTorch for this?

Have you tried the method suggested above, register_backward_hook?

It seems this is not applicable here, since register_backward_hook only lets you modify the gradients with respect to a module's inputs and outputs, not the gradient of an individual op like round() inside the graph.
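In that case the usual route is a custom torch.autograd.Function that rounds in the forward pass and passes the gradient through untouched in the backward pass (a straight-through estimator). A minimal sketch mirroring the TF snippet above (RoundSTE is just an illustrative name):

import torch

class RoundSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Forward: ordinary rounding
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Backward: identity, like the {"Round": "Identity"} override
        return grad_output

def quantize(x, k):
    n = float(2**k - 1)
    return RoundSTE.apply(x * n) / n

x = torch.rand(3, requires_grad=True)
quantize(x, 2).sum().backward()
print(x.grad)  # all ones, as if round were the identity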