Hi @jerryzh168, yes. I initially updated to 1.7.0.dev20200705+cpu and have just tried torch-1.7.0.dev20200724+cpu. No luck with either.
As I had hijacked an old thread, I thought it best to raise a separate issue with a simple example (a single fully connected layer) to reproduce the problem -
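For context, here is a minimal sketch of the kind of single-FC-layer repro I mean, assuming eager-mode post-training static quantization with the fbgemm backend; the layer sizes and input shapes are arbitrary, and the commented-out export line is where the failure occurs:

```python
import torch
import torch.nn as nn

class SingleFC(nn.Module):
    """One fully connected layer wrapped in quant/dequant stubs."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(4, 2)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = SingleFC().eval()
# Post-training static quantization (eager mode), fbgemm for x86 CPU
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(torch.randn(8, 4))  # calibration pass to collect activation stats
torch.quantization.convert(model, inplace=True)

out = model(torch.randn(1, 4))  # quantized inference works fine

# The export step is what fails for quantized models:
# torch.onnx.export(model, torch.randn(1, 4), "single_fc.onnx")
```

The quantized forward pass itself runs without issue; it is only the ONNX export of the converted (quantized) model that errors out.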
I’ve had one reply with a comment explaining that export of quantized models is not yet supported, plus a link to another thread. Sounds like it’s WIP. Would be good to get your take on the example in that other thread.