Error when using quantization and torch.jit.script for TransformerEncoder

Hi,
I tried to use the PyTorch built-in TransformerEncoder for my model training. For inference, I tried to quantize it and export it for libtorch in C++. I ran into a strange error when I use both quantization and torch.jit.script together. However, if I use only quantization or only torch.jit.script, it works fine.
I am using PyTorch 1.5.1 and CUDA 10.2.
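
For reference, a minimal sketch of the two paths that do work in isolation (the 1024/8 sizes just mirror the repro below; this only illustrates the "each step alone works" claim):

import torch

layer = torch.nn.TransformerEncoder.__new__  # placeholder removed below; see actual code
layer = torch.nn.TransformerEncoderLayer(1024, nhead=8)

# Path 1: dynamic quantization alone -- no error.
encoder = torch.nn.TransformerEncoder(layer, 2)
quantized = torch.quantization.quantize_dynamic(encoder, dtype=torch.qint8)

# Path 2: scripting alone (no quantization) -- also no error.
scripted = torch.jit.script(torch.nn.TransformerEncoder(layer, 2))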

The error is as follows when torch.jit.script(module) is called:

RuntimeError:
method cannot be used as a value:
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/activation.py", line 831
        self.in_proj_weight, self.in_proj_bias,
        self.bias_k, self.bias_v, self.add_zero_attn,
        self.dropout, self.out_proj.weight, self.out_proj.bias,
                      ~~~~~~~~~~~~~~~~~~~~ <--- HERE
        training=self.training,
        key_padding_mask=key_padding_mask, need_weights=need_weights,

You can easily reproduce this error by running the following sample code:

import torch

class MyModule(torch.nn.Module):
    def __init__(self, hidden_size, nhead):
        super(MyModule, self).__init__()
        encoder_layer = torch.nn.TransformerEncoderLayer(hidden_size, nhead=nhead,
            dim_feedforward=hidden_size, dropout=0.1, activation='gelu')
        encoder_norm = torch.nn.LayerNorm(hidden_size)
        self.encoder = torch.nn.TransformerEncoder(encoder_layer, 20, encoder_norm)

    def forward(self, x):
        out = self.encoder(x)
        return out

my_module = MyModule(1024, 8)
# Dynamic quantization alone succeeds.
my_module = torch.quantization.quantize_dynamic(my_module, dtype=torch.qint8)
# Scripting the quantized module raises the RuntimeError above.
sm = torch.jit.script(my_module)
torch.jit.save(sm, 'test.jit.model')
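
For context, the consumption step the export is meant for would look like this once the save succeeds (shown in Python for brevity; in C++ the analogous entry point is torch::jit::load):

import torch

loaded = torch.jit.load('test.jit.model')
loaded.eval()
with torch.no_grad():
    # TransformerEncoder expects input of shape (seq_len, batch, hidden_size).
    out = loaded(torch.randn(10, 2, 1024))
print(out.shape)  # expected: torch.Size([10, 2, 1024])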