Differentiable quantization

Hello all,
I have a class Quant_conv2d, and a class Quantizer whose forward() returns differentiable quantized weights:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Quant_conv2d(nn.Module):

    def __init__(self, levels, sigma=1):
        super(Quant_conv2d, self).__init__()
        # 1 input channel, 1 output channel, 1x1 convolution kernel
        self.levels = levels
        self.sigma = sigma
        self.conv1 = nn.Conv2d(1, 1, 1)
        # quantized copy of the conv weights, computed once at construction time
        self.new_weight = Quantizer(x=self.conv1.weight, levels=self.levels).forward()

    def forward(self, input):
        return F.conv2d(input=input.cuda(), weight=self.new_weight, bias=torch.tensor([0.0]).cuda())
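
In case it matters, Quantizer is roughly along these lines (a minimal sketch of the idea: each weight is softly assigned to the given levels so the result stays differentiable; the actual class does a bit more with sigma):

class Quantizer(nn.Module):

    def __init__(self, x, levels, sigma=1.0):
        super(Quantizer, self).__init__()
        self.x = x
        self.levels = levels
        self.sigma = sigma

    def forward(self):
        # squared distance of every weight to every quantization level: (..., n_levels)
        d = (self.x.unsqueeze(-1) - self.levels) ** 2
        # soft assignment of each weight over the levels
        w = torch.softmax(-d / (2 * self.sigma ** 2), dim=-1)
        # weighted sum of the levels = soft-quantized weights, same shape as x
        return (w * self.levels).sum(dim=-1)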

Then I do:

qdata_interim1 = Quant_conv2d(levels=torch.tensor([-1.0, 0.0, 1.0]))
qdata_out1 = qdata_interim1.forward(input=qdata_inp1)
torch.sum(qdata_out1).backward()
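
Here qdata_inp1 is just some dummy input on the GPU; for the sake of the example, assume something like:

qdata_inp1 = torch.randn(1, 1, 5, 5).cuda()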

And then:

for name, value in qdata_interim1.named_parameters():
    print('grad for Quant_conv2d is:', value.grad)

I get the grad as None, but I expect it to be a gradient with respect to Quant_conv2d.conv1.weight.

Can anyone tell me how to get around this, and where exactly I am going wrong?
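
One thing I am wondering: since new_weight is computed only once in __init__, should the quantization instead happen inside forward on every call? Something like this (just a sketch of what I mean, not tested):

def forward(self, input):
    # re-quantize the current conv weights on every forward pass, so the
    # graph from conv1.weight to the output is rebuilt for each backward()
    new_weight = Quantizer(x=self.conv1.weight, levels=self.levels, sigma=self.sigma).forward()
    return F.conv2d(input=input.cuda(), weight=new_weight, bias=torch.tensor([0.0]).cuda())

Would that be the right way to keep conv1.weight in the autograd graph?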