Offline Activation Calibration for GRUs/LSTMs

Currently, static quantization doesn't seem to be supported for recurrent networks (RNNs/LSTMs/GRUs); only dynamic quantization appears to be available. Can offline calibration of activations be done for a model containing recurrent layers using native PyTorch quantization primitives, so as to mimic static quantization?
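
Concretely, something like the sketch below is what I'm imagining: attach a `MinMaxObserver` to the LSTM output via a forward hook, run calibration data through the FP32 model offline, then reuse the collected scale/zero-point to quantize-dequantize the activations at inference. The model (`TinyRNN`), the hook function, and the calibration loop are made up for illustration; only the observer, hook, and `torch.quantize_per_tensor` calls are standard PyTorch primitives.

```python
import torch
import torch.nn as nn
from torch.quantization import MinMaxObserver

class TinyRNN(nn.Module):  # illustrative toy model
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, 4)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])

model = TinyRNN().eval()

# Observer for the LSTM output activations (per-tensor affine, quint8).
act_observer = MinMaxObserver(dtype=torch.quint8,
                              qscheme=torch.per_tensor_affine)

def observe_lstm_output(module, inputs, output):
    # nn.LSTM returns (output, (h_n, c_n)); observe the output sequence.
    act_observer(output[0].detach())

hook = model.lstm.register_forward_hook(observe_lstm_output)

# "Offline calibration": run representative data through the FP32 model.
with torch.no_grad():
    for _ in range(10):
        model(torch.randn(8, 20, 16))
hook.remove()

scale, zero_point = act_observer.calculate_qparams()

# At inference, reuse the collected qparams to quantize/dequantize the
# LSTM activations, mimicking what static quantization would do.
with torch.no_grad():
    out, _ = model.lstm(torch.randn(1, 20, 16))
    q = torch.quantize_per_tensor(out, float(scale), int(zero_point),
                                  torch.quint8)
    out_dq = q.dequantize()
```

Is this (or something like it) the intended way to calibrate activations for recurrent layers, or is there a supported workflow I'm missing?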