Torch.quantization on JIT exported model

I saw the following example in the torch.quantization announcement:

import torch
from torch import quantization

# ResNet50, "model.pt", and data_loader come from the announcement's context
model = ResNet50()
model.load_state_dict(torch.load("model.pt"))

# Attach observers that will record activation statistics
qmodel = quantization.prepare(
    model, {"": quantization.default_qconfig})

# Calibrate the prepared model (the announcement ran model(batch) here,
# but the observers live on qmodel)
qmodel.eval()
for batch, target in data_loader:
    qmodel(batch)

# Swap observed modules for their quantized counterparts
qmodel = quantization.convert(qmodel)

Is there a way to get an example like this to work with a model that was traced/exported with torch.jit in v1.3.0?
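For concreteness, here is roughly what I am attempting. The file name and data_loader are placeholders, and I am assuming the same eager-mode prepare/convert calls as the announcement; whether they can rewrite the submodules of a loaded ScriptModule is exactly what I am unsure about:

import torch
from torch import quantization

# A model traced and saved with v1.3.0, e.g.:
#   torch.jit.trace(model, example_input).save("model_traced.pt")
loaded = torch.jit.load("model_traced.pt")
loaded.eval()

# Same recipe as above, applied to the loaded ScriptModule;
# unclear whether prepare/convert can modify a traced graph
qmodel = quantization.prepare(
    loaded, {"": quantization.default_qconfig})
for batch, target in data_loader:
    qmodel(batch)
qmodel = quantization.convert(qmodel)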
