Hi guys!
Thanks for providing support for quantization in PyTorch.
Consider updating the docs on saving/loading quantized models for as long as eager-mode quantization (manual tagging) is supported.
Notice that you do have an example, but I'm using eager mode and it didn't work off the shelf. It took me a while to realize that the absence of `DeQuantStub` was what was killing me.
Notice that you were OK with adding the `DeQuantStub` boilerplate here.
Cheers,
Victor