Quantization layer implementation

Hi there,

I would like to implement a single quantization layer with the forward pass of

x = floor(y / scale + offset)

where scale and offset are fixed scalars given at initialization. Do I need to take care of backpropagation myself, or will autograd handle it automatically?

Sorry for the newbie question
Thank you


Hi! I think this will do the job:

import torch
import torch.nn as nn

class Quantization(nn.Module):
    def __init__(self, scale, offset):
        super().__init__()
        # scale and offset are fixed scalars given at construction time
        self.scale = scale
        self.offset = offset

    def forward(self, x):
        return torch.floor(x / self.scale + self.offset)

And then you can use it as follows:

x = torch.rand(4, 1, requires_grad=True)
A = torch.rand(1, 4, requires_grad=True)
q = Quantization(0.1, 1.0)
y = q(x)
z = torch.matmul(A, y)  # (1, 1) output, so backward() needs no explicit gradient argument
z.backward()

To print the gradients:

print(A.grad)
print(x.grad)
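
One caveat: torch.floor has a derivative of zero almost everywhere, so x.grad in the example above will come back as all zeros, even though autograd builds the graph without complaint. If you need non-zero gradients to flow through the quantization step, a common workaround is a straight-through estimator. Here is a minimal sketch under the same scale/offset setup (the class name QuantizationSTE is just for illustration):

class QuantizationSTE(nn.Module):
    def __init__(self, scale, offset):
        super().__init__()
        self.scale = scale
        self.offset = offset

    def forward(self, x):
        y = x / self.scale + self.offset
        # Numerically this equals torch.floor(y), but detach() removes
        # floor's zero gradient from the graph, so backward treats the
        # rounding step as the identity (straight-through estimator).
        return y + (torch.floor(y) - y).detach()

In the toy example above, x.grad then comes out as A.T / scale instead of all zeros.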

Thanks a lot. I will give it a try.