Error converting BLSTM to ONNX model

Hi, I'm getting the error below when I attempt to convert a BLSTM audio model to ONNX. Is this not supported?

```
  File "convert_model.py", line 27, in <module>
    main()
  File "convert_model.py", line 23, in main
    torch_out = torch.onnx._export(model, spect, 'deepspeech.onnx', export_params=True, verbose=True)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/__init__.py", line 21, in _export
    return utils._export(*args, **kwargs)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/utils.py", line 226, in _export
    example_outputs, propagate)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/utils.py", line 180, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/utils.py", line 107, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/__init__.py", line 56, in _run_symbolic_method
    return utils._run_symbolic_method(*args, **kwargs)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/utils.py", line 291, in _run_symbolic_method
    return symbolic_fn(*args)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 906, in symbolic_flattened_wrapper
    return sym(g, input, weights, hiddens, batch_sizes)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 974, in symbolic
    weight_ih_f, weight_hh_f, bias_f = transform_weights(2 * i)
  File "/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 961, in transform_weights
    [reform_weights(g, w, hidden_size, reform_permutation) for w in all_weights[layer_index]]
ValueError: not enough values to unpack (expected 4, got 2)
```
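For what it's worth, the unpack error itself is easy to reproduce in plain Python. My guess (I haven't confirmed this against the torch source) is that the exporter's `transform_weights` expects four tensors per layer (two weight matrices plus two bias vectors), while my model's layers only supply two. A minimal sketch, with placeholder data rather than real torch internals:

```python
# Plain-Python sketch of the failing unpack. The function name mirrors
# the traceback; the weight list below is a placeholder, not torch's
# actual all_weights structure.
def transform_weights(all_weights, layer_index):
    # The symbolic unpacks four per-layer values: two weight matrices
    # and two bias vectors.
    weight_ih, weight_hh, bias_ih, bias_hh = all_weights[layer_index]
    return weight_ih, weight_hh, bias_ih, bias_hh

# A layer exposing only its two weight matrices (no biases) triggers
# the same ValueError as in the traceback:
layer_weights = [("weight_ih_l0", "weight_hh_l0")]
try:
    transform_weights(layer_weights, 0)
except ValueError as err:
    print(err)  # not enough values to unpack (expected 4, got 2)
```

If that guess is right, the question becomes whether the exporter simply requires LSTM layers to carry biases.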