Run inference on Android for a quantized model

What is the use case for [LiteModuleLoader](https://github.com/pytorch/pytorch/blob/master/android/pytorch_android/src/main/java/org/pytorch/LiteModuleLoader.java)?
torch::jit::load generally supports both regular and quantized models, so does it already cover the functionality of LiteModuleLoader?
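For reference, a minimal sketch of the two loading paths being compared, written as a plain Java method for brevity. The file paths and input shape are placeholders, and it assumes the models were already exported (a `.ptl` via `_save_for_lite_interpreter` for the lite path, a regular `torch.jit.save`'d `.pt` for the full-JIT path) and pushed to the device:

```java
import org.pytorch.IValue;
import org.pytorch.LiteModuleLoader;
import org.pytorch.Module;
import org.pytorch.Tensor;

public class LoaderComparison {
    public static void main(String[] args) {
        // Lite interpreter path: expects a model saved in Python with
        // module._save_for_lite_interpreter("model.ptl").
        Module liteModule = LiteModuleLoader.load("/data/local/tmp/model.ptl");

        // Full JIT path: loads a regular torch.jit.save'd model; this is
        // the runtime the question refers to, which handles both float
        // and quantized models.
        Module fullModule = Module.load("/data/local/tmp/model.pt");

        // Both loaders return org.pytorch.Module, so inference looks
        // identical either way.
        float[] data = new float[1 * 3 * 224 * 224];
        Tensor input = Tensor.fromBlob(data, new long[]{1, 3, 224, 224});
        Tensor output = liteModule.forward(IValue.from(input)).toTensor();
        System.out.println("output dims: " + output.shape().length);
    }
}
```

In other words, since both loaders expose the same `Module`/`forward` interface, the question is whether the lite interpreter adds anything beyond what the full torch::jit::load runtime already provides.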