Is there any way to update the parameter and pass the grad?

I want to implement a binary neural network in PyTorch.

In a binary neural network, the weights are binarized to +1 or -1 from the original float weights, and the original float weights are updated in the backward pass according to the gradient of the binarized weights.
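To make it concrete, here is a minimal sketch of the kind of custom `torch.autograd.Function` I have in mind, following the straight-through estimator idea (the name `BinarizeSTE` and the gradient clipping at |w| > 1 are my own choices, not anything from the repo):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarize weights in forward; pass the gradient straight through in backward."""

    @staticmethod
    def forward(ctx, weight):
        ctx.save_for_backward(weight)
        # Map every float weight to exactly +1 or -1.
        return torch.where(weight >= 0,
                           torch.ones_like(weight),
                           -torch.ones_like(weight))

    @staticmethod
    def backward(ctx, grad_output):
        (weight,) = ctx.saved_tensors
        # Straight-through estimator: copy the gradient of the binarized
        # weight onto the float weight, zeroed where |w| > 1 (a common
        # clipping choice, assumed here).
        grad_input = grad_output.clone()
        grad_input[weight.abs() > 1] = 0
        return grad_input
```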

But the approach in the existing BinaryNet.pytorch is wrong: the original float weights are not updated at all, because it directly modifies `.data`, see https://github.com/itayhubara/BinaryNet.pytorch/blob/master/models/binarized_modules.py#L96-L101 .

Is there any way to update the parameters and pass the gradient correctly?
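For reference, this is roughly how I imagine wiring such a `Function` into a layer so that the optimizer sees and updates the float weights directly, while only the forward pass uses the binarized values (`BinaryLinear` is a hypothetical layer built on the `BinarizeSTE` sketch above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryLinear(nn.Linear):
    """Linear layer whose float weights are binarized on the fly.

    self.weight stays a float Parameter, so the optimizer updates it
    using the straight-through gradients from BinarizeSTE.
    """

    def forward(self, input):
        binary_weight = BinarizeSTE.apply(self.weight)
        return F.linear(input, binary_weight, self.bias)

# Quick check: the float weights receive gradients and get updated.
layer = BinaryLinear(4, 3)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
loss = layer(torch.randn(2, 4)).sum()
loss.backward()
opt.step()  # updates the float weights, no .data tricks needed
```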

You can take a look at this article.