How to autograd `x = torch.round(x / self.scale) * self.scale`

import torch
import torch.nn as nn

class Conv2dQuant(nn.Module):
    def __init__(self):
        super(Conv2dQuant, self).__init__()
        self.conv_weight = 5
        self.conv_bias = 5
        self.scale = 3

    def forward(self, x):
        # quantize the input to multiples of self.scale, then apply the affine map
        x = torch.round(x / self.scale) * self.scale
        x = x * self.conv_weight + self.conv_bias
        return x

If I define a module like the one above, I know how to run the forward pass. However, when it comes to the backward pass, I do not know how the round function is handled, and I wonder how to autograd `x = torch.round(x / self.scale) * self.scale`.

When I try to train this module, with the weight and bias set to requires_grad = True, I get:

      7     def forward(self, x):
----> 8         x = torch.round(x / self.scale) * self.scale
      9         x = x * self.conv_weight + self.conv_bias
     10         return x

RuntimeError: round_vml_cpu not implemented for 'Long'
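
This error means torch.round was called on a LongTensor: on the PyTorch version shown in the traceback, round is only implemented for floating-point tensors on the CPU, so the input (and the result of the division) has to be a float tensor. A minimal sketch of the failure and the fix, assuming x arrives as an integer tensor:

import torch

x = torch.tensor([7])                   # int64 (Long) tensor
# torch.round(x / 3)                    # raised the error above on older PyTorch
print(torch.round(x.float() / 3) * 3)   # works: tensor([6.])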

I’m not sure if I understood the use case correctly, but if you would like to train self.conv_weight and self.conv_bias, you should define them as nn.Parameter objects (containing float values):

class Conv2dQuant(nn.Module):
    def __init__(self):
        super(Conv2dQuant, self).__init__()
        # registering the values as nn.Parameter (float tensors) makes them
        # trainable and lets autograd track them
        self.conv_weight = nn.Parameter(torch.tensor([5.]))
        self.conv_bias = nn.Parameter(torch.tensor([5.]))
        self.scale = 3

    def forward(self, x):
        x = torch.round(x / self.scale) * self.scale
        x = x * self.conv_weight + self.conv_bias
        return x

model = Conv2dQuant()
x = torch.randn(1, 1)
output = model(x)
output.backward()  # output holds a single element, so no grad argument is needed
print(model.conv_weight.grad)
# tensor([3.])
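
Note that the gradient of the output w.r.t. conv_weight is torch.round(x / 3) * 3, so the printed value depends on the randomly sampled x (here it happened to be tensor([3.])). Also, torch.round itself has a zero derivative almost everywhere, so no gradient flows back to x through the rounding step; conv_weight and conv_bias still get gradients because the rounded value only acts as a constant factor. If you also need gradients through the rounding (common in quantization-aware training), the usual workaround is a straight-through estimator. A minimal sketch (not from the original code) using a custom autograd.Function:

import torch

class RoundSTE(torch.autograd.Function):
    # Rounds in the forward pass; the backward pass pretends round is the
    # identity, so upstream gradients pass through unchanged (straight-through).
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

scale = 3.0
x = torch.randn(4, requires_grad=True)
y = RoundSTE.apply(x / scale) * scale
y.sum().backward()
print(x.grad)  # tensor([1., 1., 1., 1.]) -- identity gradient through the quantizer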