PyTorch Quantization of VGG16

I'm trying to quantize a VGG16 model to 8 bits on the EMNIST (Balanced) dataset. The non-quantized model works fine, but when I re-run the layers through the quantized model I get a NaN error, shown below.

I'm also attaching the forward function; I need help seeing what I'm missing.

```
ValueError                                Traceback (most recent call last)
in ()
----> 1 testQuant(q_model, test_loader, quant=True, stats=stats)

4 frames
in calcScaleZeroPoint(min_val, max_val, num_bits)
     22     zero_point = initial_zero_point
     23
---> 24     zero_point = int(zero_point)
     25
     26     return scale, zero_point

ValueError: cannot convert float NaN to integer
```
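The error means the `min_val`/`max_val` reaching `calcScaleZeroPoint` are NaN, so the computed zero point is NaN before the `int()` cast. Since the original `calcScaleZeroPoint` isn't shown in the post, here is a hypothetical sketch of the usual affine (asymmetric) computation for unsigned 8-bit quantization, with a guard that fails early and names the bad inputs instead of dying at the `int()` cast:

```python
import math

def calc_scale_zero_point(min_val, max_val, num_bits=8):
    # Hypothetical sketch, not the poster's actual function: affine mapping of
    # [min_val, max_val] onto the unsigned integer range [0, 2**num_bits - 1].
    if math.isnan(min_val) or math.isnan(max_val):
        # Surface the NaN here, where it is still traceable to the stats dict.
        raise ValueError(f"NaN in activation stats: min={min_val}, max={max_val}")

    qmin, qmax = 0.0, 2.0 ** num_bits - 1.0
    scale = (max_val - min_val) / (qmax - qmin)

    initial_zero_point = qmin - min_val / scale
    # Clamp to the representable range before rounding to an integer.
    zero_point = int(max(qmin, min(qmax, initial_zero_point)))
    return scale, zero_point
```

If the guard fires, the NaN originates upstream (in the collected stats or in the activations), not in this function.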

```python
def quantForward(model, x, stats):

    # Quantise before inputting into incoming layers
    x = quantize_tensor(x, min_val=stats['conv1_1']['min'], max_val=stats['conv1_1']['max'])

    x, scale_next, zero_point_next = quantizeLayer(x.tensor, model.conv1_1, stats['conv1_2'], x.scale, x.zero_point)

    x = F.max_pool2d(x, 2, 2)

    x, scale_next, zero_point_next = quantizeLayer(x, model.conv1_2, stats['conv2_1'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv2_1, stats['conv2_2'], scale_next, zero_point_next)

    x = F.max_pool2d(x, 2, 2)

    x, scale_next, zero_point_next = quantizeLayer(x, model.conv2_2, stats['conv3_1'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv3_1, stats['conv3_2'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv3_2, stats['conv3_3'], scale_next, zero_point_next)

    x = F.max_pool2d(x, 2, 2)

    x, scale_next, zero_point_next = quantizeLayer(x, model.conv3_3, stats['conv4_1'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv4_1, stats['conv4_2'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv4_2, stats['conv4_3'], scale_next, zero_point_next)

    # x = F.max_pool2d(x, 2, 2)

    x, scale_next, zero_point_next = quantizeLayer(x, model.conv4_3, stats['conv5_1'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv5_1, stats['conv5_2'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv5_2, stats['conv5_3'], scale_next, zero_point_next)

    # x = F.max_pool2d(x, 2, 2)

    x, scale_next, zero_point_next = quantizeLayer(x, model.conv5_3, stats['fc6'], scale_next, zero_point_next)

    print(zero_point_next)

    x, scale_next, zero_point_next = quantizeLayer(x, model.fc6, stats['fc7'], scale_next, zero_point_next)

    x = x.view(x.shape[0], -1)

    x, scale_next, zero_point_next = quantizeLayer(x, model.fc7, stats['fc8'], scale_next, zero_point_next)

    # Back to dequant for final layer
    x = dequantize_tensor(QTensor(tensor=x, scale=scale_next, zero_point=zero_point_next))

    x = model.fc8(x)

    return F.log_softmax(x, dim=1)
```
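One quick way to narrow the problem down is to scan the collected stats for NaN/inf before calling `quantForward`. This is a hypothetical helper, assuming `stats` maps layer names (e.g. `'conv1_1'`) to dicts holding float `'min'`/`'max'` values as the code above uses them:

```python
import math

def find_bad_stats(stats):
    # Hypothetical helper: return (layer, key, value) for every NaN or inf
    # min/max entry in the stats dict; an empty list means the stats are clean.
    bad = []
    for layer, s in stats.items():
        for key in ('min', 'max'):
            v = float(s[key])
            if math.isnan(v) or math.isinf(v):
                bad.append((layer, key, v))
    return bad
```

If `find_bad_stats(stats)` is empty, the NaN is being produced during the quantized forward pass itself (e.g. a scale of zero from a layer where min equals max) rather than during stats collection.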

Can you point me to the VGG model you are trying to quantize? Is it the one from torchvision?