Documentation saving/loading quantized models

Hi guys!

Thanks for providing support for quantization in PyTorch.

Consider updating the docs on saving/loading quantized models while eager-mode quantization (manual tagging) is still supported.

I’m using eager mode; the example didn’t work off the shelf. It took me a while to realize that the absence of QuantStub/DeQuantStub was killing me.

Notice that you were OK with adding the QuantStub/DeQuantStub boilerplate here
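For anyone hitting the same wall, here is a minimal sketch of the eager-mode pattern being discussed. It is an illustration only, assuming the `torch.quantization` static-quantization API and the `fbgemm` backend; the model, shapes, and qconfig are made up for the example:

```python
import io
import torch
import torch.nn as nn

# Eager-mode static quantization requires manual tagging: the model must
# route its float ops between a QuantStub (float -> quantized) and a
# DeQuantStub (quantized -> float). Omitting the stubs is the failure
# mode described above.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(1, 1, 1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)        # manually tag where quantization begins
        x = self.relu(self.conv(x))
        return self.dequant(x)   # ...and where it ends

model = M().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")  # assumed backend
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 1, 4, 4))   # calibration pass with sample data
torch.quantization.convert(model, inplace=True)

# Saving: serialize the state_dict of the *converted* model. To load it
# back, rebuild the model and repeat the same prepare/convert steps
# before calling load_state_dict.
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
```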

Cheers,
Victor

Hi, this is documented in the section on eager-mode quantization; see Quantization — PyTorch 1.13 documentation

You are right!

For completeness, it might be great to reference that link in the section I mentioned :blush:.