How to transform a pytorch JIT pth model to caffe prototxt/caffemodel

For example:
I followed the static quantization tutorial and generated a scripted quantized model.

Since the quantized model differs from the float model because of quantization (conv+bn gets fused into conv, and so on), I can't obtain a .py file describing the model after quantization. So I would like to know: how can I generate a caffe prototxt that draws the whole network?
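For reference, the tutorial's flow can be sketched roughly like this (the module and its layer names are illustrative, not the tutorial's actual model):

```python
import torch
import torch.nn as nn

# Minimal float model with quant/dequant stubs marking the quantized region.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(1, 1, 1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)            # float -> quantized
        x = self.relu(self.conv(x))
        return self.dequant(x)       # quantized -> float

model = M().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 1, 4, 4))       # calibration pass with sample data
torch.quantization.convert(model, inplace=True)

# Script and save the quantized model; this is the .pth/.pt being discussed.
scripted = torch.jit.script(model)
scripted.save("quantized_scripted.pt")
```

After `convert`, the modules are replaced by their quantized counterparts, which is why the original Python class no longer describes the network.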

Can you elaborate on your higher-level goals a little bit? We don't have any direct way to produce a caffe2 model from a PyTorch model, but you can see a description of the compiled model like so:

model = torch.jit.load(model_file)
print(model.code)  # human-readable TorchScript for the compiled model

What about exporting to caffe2 in an indirect way? Is it possible to somehow use the scale/zero_point values and get the same outputs as in PyTorch?
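In principle the per-tensor arithmetic is reproducible, since PyTorch's quantized tensors use a standard affine mapping. A minimal sketch of that mapping (the scale and zero_point values below are illustrative, not from any particular model):

```python
# Affine quantization as used by PyTorch quantized tensors:
#   q     = clamp(round(x / scale) + zero_point, qmin, qmax)
#   x_hat = (q - zero_point) * scale
# Reproducing this arithmetic in another framework is what "getting the
# same outputs" amounts to, layer by layer.

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))   # clamp to the quantized range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = 0.05, 128                # illustrative values
q = quantize(0.3, scale, zp)         # -> 134
x_hat = dequantize(q, scale, zp)     # recovers 0.3 up to one scale step
```

Matching PyTorch exactly also requires matching its rounding mode and the requantization done inside fused ops, so bit-exact parity is harder than this sketch suggests.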