How is model size determined for quantization-aware training?

I am trying quantization-aware training on a GAN model, quantizing the generator part. I use two approaches:

  1. I quantize the entire network, including all of its layers.
  2. I quantize only the first conv layer (conv2d + relu) and leave the rest of the layers as is. (I set qconfig accordingly during training: None for the modules I am not quantizing and the qnnpack QAT qconfig for the first conv layer; see the sketch below.)
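
In case it helps, this is roughly what I do for approach 2. The generator below is just a placeholder (layer names, channel counts, and overall structure are not my actual model), but the qconfig handling is the same:

```python
import torch
import torch.nn as nn
import torch.quantization as tq

# Placeholder generator -- my real model is larger, but the idea is the same.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # First conv block (the only part I quantize in approach 2)
        self.first_conv = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        # Remaining layers, left as is
        self.rest = nn.Sequential(nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.rest(self.first_conv(x))

model = Generator()

# Approach 2: QAT qconfig only on the first conv block, None everywhere else.
model.qconfig = None
model.first_conv.qconfig = tq.get_default_qat_qconfig('qnnpack')

model.train()
tq.prepare_qat(model, inplace=True)

# ... QAT fine-tuning loop goes here ...
```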

However, the model size is the same in both cases. Could anyone tell me how that is possible?
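
By "model size" I mean the size on disk of the saved weights, which I compare roughly like this (the filename is just an example):

```python
import os
import torch

# Size of the serialized state_dict, in MB -- this is the number that is
# identical for the two cases above.
torch.save(model.state_dict(), 'generator.pth')
print(os.path.getsize('generator.pth') / 1e6, 'MB')
```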