Hi,

I am writing a custom binarization layer that takes an input and outputs its binarized result. Since this stochastic binarization does not change the expectation of the input variable, the backward pass should pass the gradient through unchanged.

```
import torch
from torch.autograd import Function
import torch.nn as nn


class _DefineBinarization(Function):
    '''As this binarization process does not change the expectation,
    we pass the gradient through unchanged.
    '''
    @staticmethod
    def forward(ctx, input):
        # Sample a boolean mask with probability (1 + input) / 2,
        # then write +1/-1 into `input` in place
        p = torch.bernoulli((1 + input) / 2).to(torch.bool)
        np = torch.bitwise_not(p)
        input[p] = 1
        input[np] = -1
        return input

    @staticmethod
    def backward(ctx, grad):
        # Pass the incoming gradient through unchanged
        return grad


class Binarization(nn.Module):
    def __init__(self):
        super(Binarization, self).__init__()

    def forward(self, input):
        return _DefineBinarization.apply(input)


class BinarizeLayer(nn.Module):
    def __init__(self, num):  # `num` is currently unused
        super(BinarizeLayer, self).__init__()
        self.binarize = Binarization()
        self.tanh = nn.Tanh()

    def forward(self, input):
        input = self.tanh(input)                # squash to (-1, 1)
        input = (self.binarize(input) + 1) / 2  # binarize, then map {-1, +1} to {0, 1}
        return input


binarize_layer = BinarizeLayer(10)
x = torch.rand(1, 10).requires_grad_()
y = binarize_layer(x)
h = torch.mean(y)
h.backward()
```

This gives me the following error:

```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 10]], which is output 0 of TanhBackward, is at version 2; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
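For reference, an out-of-place variant of the Function runs through `backward()` without complaint (a sketch; `_BinarizeOutOfPlace` is just a name I made up for this post):

```
class _BinarizeOutOfPlace(Function):
    '''Same sampling as above, but returns a fresh tensor
    instead of writing the +1/-1 values into `input`.'''
    @staticmethod
    def forward(ctx, input):
        p = torch.bernoulli((1 + input) / 2)  # 0/1 samples with probability (1 + input) / 2
        return 2 * p - 1                      # map {0, 1} to {-1, +1} in a new tensor

    @staticmethod
    def backward(ctx, grad):
        return grad
```

Swapping this into `Binarization` makes `h.backward()` run cleanly, so the in-place writes in `forward` seem to be what autograd is objecting to.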

Shouldn’t PyTorch simply follow my custom backward function without checking whether the forward operation modified anything in place?
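In case it helps anyone reproduce this, the hint from the error message can be enabled before running the script:

```
# Per the hint in the error message: prints the traceback of the
# forward op whose backward failed, at the cost of slower execution.
torch.autograd.set_detect_anomaly(True)
```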

Thanks!