After I used torch.quantization.quantize_dynamic() to quantize the original model, I saved and loaded the quantized model, but when I ran inference it raised the error below. The original (unquantized) model still runs inference fine, so I don't understand what is going wrong.
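Here is a minimal sketch of what I'm doing (simplified stand-in code, not the real Tacotron2 modules; the layer sizes and file names are made up). The small Encoder below mirrors the project's Encoder.inference(), which calls self.lstm.flatten_parameters() before running the LSTM:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Stand-in for the Tacotron2 encoder: calls flatten_parameters()
    # the same way Encoder.inference() does in model/model.py.
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(80, 256, batch_first=True)

    def forward(self, x):
        self.lstm.flatten_parameters()
        out, _ = self.lstm(x)
        return out

model = Encoder().eval()
x = torch.randn(1, 10, 80)
model(x)  # the float model runs fine

# Dynamically quantize the Linear/LSTM weights to int8.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear, nn.LSTM}, dtype=torch.qint8
)

# Save and reload the quantized model, then run inference again.
torch.save(qmodel, 'encoder_quantized.pt')
qmodel = torch.load('encoder_quantized.pt')
qmodel(x)  # this is where the AttributeError appears

This is the full traceback from my actual run: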
Traceback (most recent call last):
  File "inference.py", line 81, in <module>
    output = infer(args.text, model)
  File "inference.py", line 30, in infer
    mel_outputs, mel_outputs_postnet, _, alignments = model.inference(sequence)
  File "/media/tma/DATA/Khai-folder/Tacotron2-PyTorch/model/model.py", line 542, in inference
    encoder_outputs = self.encoder.inference(embedded_inputs)
  File "/media/tma/DATA/Khai-folder/Tacotron2-PyTorch/model/model.py", line 219, in inference
    self.lstm.flatten_parameters()
  File "/media/tma/DATA/miniconda3/envs/ttsv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 576, in __getattr__
    type(self).__name__, name))
AttributeError: 'LSTM' object has no attribute 'flatten_parameters'