Why does quantized tensor * 2 return an error?

I just recently discovered that this simple operation raises an error.

import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub

class QNet(nn.Module):
    def __init__(self):
        super(QNet, self).__init__()
        self.quant = QuantStub()
        self.dequant = DeQuantStub()
        self.conv = nn.Conv2d(1, 1, 1)
        self.bn = nn.BatchNorm2d(1)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = x * 2           # <<<< Error
        x = self.bn(x)
        x = self.relu(x)
        x = self.dequant(x)
        return x

NotImplementedError: Could not run 'aten::empty_strided' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, CUDA, Meta, BackendSelect, Python, Named, Conjugate, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].

Is this because a quantized tensor can only be used with quantized operators? The failure reproduces outside the module as well; a minimal sketch (the scale and zero point here are arbitrary):
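
import torch

x = torch.randn(1, 1, 4, 4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

qx * 2  # raises the same NotImplementedError: no QuantizedCPU kernel for this op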

Please take a look at the response here: NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend - #3 by jerryzh168

For reference, the fix that thread points to is routing the scalar multiply through FloatFunctional, which convert() swaps for its quantized counterpart (QFunctional). A minimal sketch of your model with that change:
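
import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub
from torch.nn.quantized import FloatFunctional

class QNet(nn.Module):
    def __init__(self):
        super(QNet, self).__init__()
        self.quant = QuantStub()
        self.dequant = DeQuantStub()
        self.conv = nn.Conv2d(1, 1, 1)
        self.bn = nn.BatchNorm2d(1)
        self.relu = nn.ReLU()
        # Swapped for nn.quantized.QFunctional during convert(),
        # which has an int8 kernel for scalar multiplication
        self.mul = FloatFunctional()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.mul.mul_scalar(x, 2)  # instead of x * 2
        x = self.bn(x)
        x = self.relu(x)
        x = self.dequant(x)
        return x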

I really appreciate your reply! However, I noticed that some FloatFunctional() ops run slower than their FP32 equivalents. Is that expected? A sketch of how one might compare the two paths (using the FloatFunctional version of QNet above; shapes and batch sizes are made up, not measured results):
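
import torch
import torch.quantization
import torch.utils.benchmark as benchmark

# Eager-mode workflow: keep an FP32 copy, then prepare/calibrate/convert another.
float_model = QNet().eval()
float_model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(float_model)
prepared(torch.randn(8, 1, 224, 224))  # calibrate on dummy data
quantized_model = torch.quantization.convert(prepared)

x = torch.randn(1, 1, 224, 224)
for name, m in [("fp32", float_model), ("int8", quantized_model)]:
    t = benchmark.Timer(stmt="m(x)", globals={"m": m, "x": x})
    print(name, t.timeit(100))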