Why does dynamic quantization require input dimension >= 2?

I got the error "The dimension of input tensor should be larger than or equal to 2" when calling torch.jit.trace() on a dynamically quantized model. The error message comes from this line in the PyTorch source:

Why is "input dimension >= 2" a hard check here?

Example code:
Model architecture:

  (MLP_layer_0): Linear(
    (model): Sequential(
      (0): Linear(in_features=2, out_features=10, bias=True)
      (1): Sigmoid()
    )
  )
  (MLP_layer_1): Linear(
    (model): Linear(in_features=10, out_features=1, bias=True)
  )

Then calling dynamic quantization:

model = torch.quantization.quantize_dynamic(model)

Then calling torch.jit.trace() hits the above error:

model = torch.jit.trace(model, ...)
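For context, here is a minimal sketch that reproduces the error; the model below is illustrative (a plain nn.Sequential with the same layer shapes as the architecture above, not the poster's exact module):

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the architecture above: Linear(2, 10) -> Sigmoid -> Linear(10, 1)
model = nn.Sequential(
    nn.Linear(2, 10),
    nn.Sigmoid(),
    nn.Linear(10, 1),
)

# Dynamic quantization replaces the nn.Linear modules with dynamically
# quantized equivalents; activations stay in float.
qmodel = torch.quantization.quantize_dynamic(model)

# Tracing runs the model on the example input, so a 1-D input hits the
# "dimension of input tensor should be larger than or equal to 2" check
# inside the quantized linear op.
try:
    torch.jit.trace(qmodel, torch.randn(2))
    raised = False
except Exception as e:
    raised = True
    print("trace failed:", e)
```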

This is a requirement of the backend, in this case fbgemm.
The fbgemm library expects the inputs to the linear op to be two-dimensional:
C (output) = A (input) x B (weight), where C, A, B have shapes M x N, M x K, and K x N respectively.
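The shape contract above can be sketched with a plain matmul (M, K, N here are arbitrary illustrative sizes matching the in/out features of the first layer):

```python
import torch

# fbgemm's linear kernel computes C = A @ B with strictly 2-D operands:
M, K, N = 4, 2, 10       # batch size, in_features, out_features
A = torch.randn(M, K)    # input:  M x K
B = torch.randn(K, N)    # weight (transposed): K x N
C = A @ B                # output: M x N

# A 1-D input has no M dimension, so the kernel cannot interpret it as
# A in the product above; unsqueezing a batch dimension makes it 1 x K.
x = torch.randn(K)
x2d = x.unsqueeze(0)     # shape (1, K)
```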

Thanks for the quick response.

I'm not familiar with the backend C++ internals.
For my steps above, do you suggest changing the Linear layer shapes, or adding an extra dimension to my example_inputs tensor for the torch.jit.trace(model, example_inputs) call?
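For what it's worth, a hedged sketch of the second option (keeping the layer shapes and giving trace a 2-D example input with a leading batch dimension); the model here is again an illustrative stand-in, not the poster's exact module:

```python
import torch
import torch.nn as nn

# Illustrative model with the same layer shapes as above.
model = nn.Sequential(nn.Linear(2, 10), nn.Sigmoid(), nn.Linear(10, 1))
qmodel = torch.quantization.quantize_dynamic(model)

# Instead of a 1-D tensor of shape (2,), pass shape (1, 2): one sample
# with a batch dimension, satisfying the dim >= 2 check.
example_inputs = torch.randn(2).unsqueeze(0)   # shape (1, 2)
traced = torch.jit.trace(qmodel, example_inputs)

# The traced model still accepts any batch size at inference time.
out = traced(torch.randn(5, 2))
```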