torch.jit.script does not work on a quantized model

I am new to quantization, and I am trying to quantize a model by following the tutorial ((prototype) PyTorch 2 Export Post Training Quantization — PyTorch Tutorials 2.4.0+cu121 documentation). When the print_size_of_model function calls torch.jit.script(model) on the quantized model, I get the following error:

```
torch.jit.script(model)
/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_check.py:178: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in __init__. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in torch.jit.Attribute.
  warnings.warn(
Traceback (most recent call last):
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/_jit_internal.py", line 385, in get_type_hint_captures
    src = inspect.getsource(fn)
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/inspect.py", line 1139, in getsource
    lines, lnum = getsourcelines(object)
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/inspect.py", line 1121, in getsourcelines
    lines, lnum = findsource(object)
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/inspect.py", line 958, in findsource
    raise OSError('could not get source code')
OSError: could not get source code

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_script.py", line 1432, in script
    return _script_impl(
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_script.py", line 1146, in _script_impl
    return torch.jit._recursive.create_script_module(
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_recursive.py", line 559, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_recursive.py", line 636, in create_script_module_impl
    create_methods_and_properties_from_stubs(
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_recursive.py", line 468, in create_methods_and_properties_from_stubs
    concrete_type._create_methods_and_properties(
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_recursive.py", line 1004, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_script.py", line 1432, in script
    return _script_impl(
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_script.py", line 1204, in _script_impl
    fn = torch._C._jit_script_compile(
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/annotations.py", line 502, in try_ann_to_type
    return torch.jit._script._recursive_compile_class(ann, loc)
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/jit/_script.py", line 1602, in _recursive_compile_class
    rcb = _jit_internal.createResolutionCallbackForClassMethods(obj)
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/_jit_internal.py", line 473, in createResolutionCallbackForClassMethods
    captures.update(get_type_hint_captures(fn))
  File "/home/theo/anaconda3/envs/mgi/lib/python3.10/site-packages/torch/_jit_internal.py", line 387, in get_type_hint_captures
    raise OSError(
OSError: Failed to get source for <function TreeSpec.__init__ at 0x727a3dd68d30> using inspect.getsource
```

The function print_size_of_model runs successfully for the float resnet model.

What is causing this error, and do you have any advice on fixing it? Thank you

torch.jit.script is effectively deprecated, so I don't think we'll be fixing this. I'd suggest trying out torch.export, our newer and better export system.

Yes, I am following the export post-training quantization tutorial ((prototype) PyTorch 2 Export Post Training Quantization — PyTorch Tutorials 2.4.0+cu121 documentation); torch.jit.script is called inside its print_size_of_model function. Do you have newer tutorials for export quantization? Thank you

torch.jit.script is no longer supported, so you could probably write your own size_of_model function. We also have ao/torchao/utils.py at 174e630af2be8cd18bc47c5e530765a82e97f45b · pytorch/ao · GitHub, but it may not work 100% for the pt2e flow.
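If it helps, here is a minimal sketch of such a size helper (the function name is my own, not from torchao): it saves the model's state_dict with plain torch.save instead of torch.jit.save, so it doesn't need torch.jit.script and should also work on an exported/quantized GraphModule.

```python
import os
import tempfile

import torch
import torch.nn as nn


def print_size_of_model(model: nn.Module) -> int:
    """Print and return the on-disk size of a model's state_dict in bytes.

    Uses torch.save on the state_dict rather than torch.jit.save on a
    scripted module, so no TorchScript compilation is involved.
    """
    tmp = tempfile.NamedTemporaryFile(suffix=".pt", delete=False)
    tmp.close()
    try:
        torch.save(model.state_dict(), tmp.name)
        size = os.path.getsize(tmp.name)
    finally:
        os.remove(tmp.name)  # clean up the temporary checkpoint
    print(f"model size: {size / 1e6:.2f} MB")
    return size


# Example usage on a small float model:
size = print_size_of_model(nn.Linear(16, 16))
```

Note this measures serialized size, which includes a small pickle/zip overhead on top of the raw tensor bytes.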

get_model_size_in_bytes returns 0 for the quantized_model.

Any advice on how to learn model quantization? I want to quantize my own model. Thank you

The model size is not critical. I have another problem running the pt2e flow according to the guide ((prototype) PyTorch 2 Export Post Training Quantization — PyTorch Tutorials 2.4.0+cu121 documentation): the last evaluate call on the loaded_quantized_model fails with the following error:
`Expected input at *args[0].shape[0] to be equal to 30, but got 50`

Please refer to Pt2e_quantized_model failed in evaluating; I hope you can also help me with this. Thank you

I have an example of checking model size in Use `torch.uint1` to `torch.uint7` for Uintx tensor subclass by jerryzh168 · Pull Request #672 · pytorch/ao · GitHub