Importing a TensorFlow model for inference in TorchScript

I have a trained TensorFlow model and would like to run it for inference in TorchScript, so I can compare performance between the two frameworks.

What is the best method to do this?

I believe the pipeline would be:
Convert the TensorFlow model to a frozen graph (.pb) --> convert to ONNX (.onnx) --> convert to a PyTorch model (.pt)

Is this correct? Has anyone tried this approach before? Is there another approach that is recommended?

Or is there a way to create a TorchScript module directly from an ONNX model?