How do I convert a PyTorch image captioning model (CNN+LSTM encoder-decoder) to TFLite?

I am trying to convert the CNN+LSTM (encoder-decoder) model from the following GitHub repo: Pytorch image captioning

I want to convert this PyTorch model to TFLite. It has both encoder and decoder checkpoints. As far as I understand, both of them have to be converted to TFLite (correct me if I am wrong).

Approach: using the example from the onnx2keras library, I was able to convert the encoder to TFLite, but with the decoder I am facing the issue below.
I am not sure what the right approach is. Can anyone suggest a better approach and help me get a TFLite model?

File "convert_pytorch_tf.py", line 63, in <module>
    change_ordering=False)
File "/root/anaconda3/envs/pyt2tf/lib/python3.7/site-packages/pytorch2keras/converter.py", line 53, in pytorch_to_keras
    dummy_output = model(*args)
File "/root/anaconda3/envs/pyt2tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() missing 2 required positional arguments: 'captions' and 'lengths'
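As far as I can tell, the traceback means the decoder's forward() has the training signature (it expects captions and lengths for teacher forcing), so the converter cannot trace it from features alone. My guess is that the decoder needs an inference-only forward taking just the encoder features, along the lines of the repo's sample() method. A minimal sketch of such a wrapper (the layer sizes and greedy loop here are my own assumptions, not the repo's exact code):

```python
import torch
import torch.nn as nn

class DecoderWrapper(nn.Module):
    """Hypothetical wrapper: a single-input forward that greedily decodes
    from image features, so tracing for ONNX export needs no captions/lengths."""

    def __init__(self, embed_size, hidden_size, vocab_size, max_len=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)
        self.max_len = max_len

    def forward(self, features):
        # features: (batch, embed_size) coming from the encoder
        inputs = features.unsqueeze(1)  # (batch, 1, embed_size)
        states = None
        word_ids = []
        for _ in range(self.max_len):
            hiddens, states = self.lstm(inputs, states)  # (batch, 1, hidden)
            outputs = self.linear(hiddens.squeeze(1))    # (batch, vocab_size)
            predicted = outputs.argmax(1)                # greedy pick, (batch,)
            word_ids.append(predicted)
            inputs = self.embed(predicted).unsqueeze(1)  # feed prediction back
        return torch.stack(word_ids, 1)                  # (batch, max_len)
```

Tracing this wrapper with a dummy features tensor should give an ONNX graph with a single input (the Python loop gets unrolled to max_len steps during tracing), which onnx2keras could then take to Keras and TFLite. Is something like this the right direction, or is there a better way?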