ONNX export with optional arguments

Is there a way to do an ONNX export of a model where one of its arguments is optional?
For example:

from typing import Optional

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(256, 256)

    def forward(self, x, bias: Optional[torch.Tensor] = None):
        x = self.net(x)
        if bias is not None:
            x = x + bias
        return x
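
For reference, in eager mode both call patterns work, since bias simply defaults to None:

net = Model().eval()
x = torch.randn(4, 256)
b = torch.randn(4, 256)

y_plain  = net(x)      # bias is None, so the addition is skipped
y_biased = net(x, b)   # bias is added to the linear output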

Then I would like the following to work:

import numpy as np
import onnxruntime

net = Model().eval()
x = torch.randn(4, 256)
b = torch.randn(4, 256)

torch.onnx.export(net, (x, None), '/tmp/model.onnx', opset_version=19,
                  input_names=['x', 'bias'],
                  output_names=['y'],
                  dynamic_axes={'x'     : {0: 'B'},
                                'y'     : {0: 'B'},
                                'bias'  : {0: 'B'}})

ort = onnxruntime.InferenceSession('/tmp/model.onnx', providers=["CPUExecutionProvider"])

x = torch.randn(2, 256)
b = torch.randn(2, 256)

y1  = net(x, b)
y2, = ort.run(None, {'x': x.numpy(), 'bias': b.numpy()})
np.testing.assert_allclose(y1.detach().numpy(), y2)
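
Ideally (if ONNX/onnxruntime can express this at all), the same exported model would also accept a call where the bias input is simply omitted, something along these lines:

y3  = net(x)
y4, = ort.run(None, {'x': x.numpy()})
np.testing.assert_allclose(y3.detach().numpy(), y4)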

Thank you!

I first tried scripting the model with:

net = torch.jit.script(net)

But that didn’t help.
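
For completeness, the whole attempt was roughly the scripting step followed by the same export call as above:

net = torch.jit.script(net)   # net is the eager Model().eval() from above
torch.onnx.export(net, (x, None), '/tmp/model.onnx', opset_version=19,
                  input_names=['x', 'bias'],
                  output_names=['y'],
                  dynamic_axes={'x': {0: 'B'}, 'y': {0: 'B'}, 'bias': {0: 'B'}})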