Question about deploying a model

We can deploy a model either by exporting it with torch.jit.trace and writing the inference code with libtorch, or by converting it to ONNX and writing the inference code with Caffe2. What's the difference between the two approaches?

Also, when linking against libtorch, are all of the *.so files required, or is it possible to use only a subset of them?