Why could I not quantize LSTMCell using static custom quantization?

Is there a way to quantize LSTMCell using custom modules, the way it is done for LSTM?
I tried to implement this manually (by replacing lstm_utils with my own lstmcell_utils), but I got this error:
AttributeError: 'tuple' object has no attribute 'numel'

I don’t understand why this isn’t possible when you have managed to do it for the LSTM layer (which is normally more complicated).
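For context, my understanding of the route that works for the LSTM layer is the custom-module mapping mentioned in the torch.ao.nn.quantizable.LSTM docs, roughly like this (a minimal sketch only; the toy model, the qconfig mapping, and the example inputs are placeholders, not my actual setup):

```python
import torch
import torch.nn as nn
import torch.ao.nn.quantizable as nnqa
import torch.ao.nn.quantized as nnq
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
from torch.ao.quantization.fx.custom_config import PrepareCustomConfig, ConvertCustomConfig

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out

model = ToyModel().eval()
example_inputs = (torch.randn(2, 5, 8),)
qconfig_mapping = get_default_qconfig_mapping("fbgemm")

# nn.LSTM is treated as a custom module: swapped for an observed version
# during prepare and for a quantized version during convert
prepare_custom_config = PrepareCustomConfig().set_float_to_observed_mapping(
    nn.LSTM, nnqa.LSTM
)
convert_custom_config = ConvertCustomConfig().set_observed_to_quantized_mapping(
    nnqa.LSTM, nnq.LSTM
)

prepared = prepare_fx(
    model, qconfig_mapping, example_inputs,
    prepare_custom_config=prepare_custom_config,
)
prepared(*example_inputs)  # run calibration data through the observed model
quantized = convert_fx(prepared, convert_custom_config=convert_custom_config)
```

What I am after is an equivalent pair of mappings for nn.LSTMCell.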

With my LSTMCell version, I could not get a graph module with quantize and dequantize blocks inserted.

I am using torch.fx.

Looking forward to hearing from you!
Thanks,

Hi Ahmed,

The short answer is no, unfortunately. This would take significant engineering work, since we made LSTM a special case throughout the quantization codebase. The error you’re seeing means that an observer received a tuple as input, which it doesn’t understand because it expects a single tensor. This happens because LSTM (and LSTMCell) outputs a tuple, not a single tensor. To make this work, you would need a lot of special code to mutate the graph so that each observer receives a single tensor as input instead. If you’re really determined to make this work, you could follow this PR: https://github.com/pytorch/pytorch/pull/85068. However, we probably won’t be able to merge this change into the main branch. Feel free to let me know if I can provide any additional context.
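To make that concrete, here is a minimal repro of the same error outside of FX. MinMaxObserver is just one example observer; its forward calls .numel() on its input, so handing it the tuple that LSTMCell returns reproduces exactly the error you saw:

```python
import torch
import torch.nn as nn
from torch.ao.quantization.observer import MinMaxObserver

cell = nn.LSTMCell(input_size=8, hidden_size=16)
x = torch.randn(4, 8)

out = cell(x)      # LSTMCell returns the tuple (h_1, c_1), not a tensor
print(type(out))   # <class 'tuple'>

obs = MinMaxObserver()
obs(out[0])        # fine: a single tensor
obs(out)           # AttributeError: 'tuple' object has no attribute 'numel'
```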

Best,
-Andrew

Hello @andrewor,
Thank you for your reply!

Actually, I changed the model architecture and replaced LSTMCell with the LSTM layer to make it quantizable.
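For anyone else who runs into this, the change was roughly of this shape (a simplified sketch, not my real model): replace the per-timestep LSTMCell loop with a single nn.LSTM over the whole sequence.

```python
import torch
import torch.nn as nn

# Before: manual per-timestep loop around nn.LSTMCell
# (not supported by the FX static custom-module flow)
class CellModel(nn.Module):
    def __init__(self, input_size=8, hidden_size=16):
        super().__init__()
        self.hidden_size = hidden_size
        self.cell = nn.LSTMCell(input_size, hidden_size)

    def forward(self, x):  # x: (batch, seq, input_size)
        h = x.new_zeros(x.size(0), self.hidden_size)
        c = x.new_zeros(x.size(0), self.hidden_size)
        outputs = []
        for t in range(x.size(1)):
            h, c = self.cell(x[:, t], (h, c))
            outputs.append(h)
        return torch.stack(outputs, dim=1)

# After: the equivalent single-layer nn.LSTM, which the custom-module flow supports
class LSTMModel(nn.Module):
    def __init__(self, input_size=8, hidden_size=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

    def forward(self, x):  # x: (batch, seq, input_size)
        out, _ = self.lstm(x)
        return out
```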
It would be very helpful if you could take a look at this issue, where I want to custom-quantize the LSTM layer: Torch.ao.quantization.fx: Problem with custom LSTM quantization - #7 by Ahmed_Louati