Custom endpoint URL for TorchServe

Hi there, I’m trying to deploy a model that must be served at localhost:8080/invocations, but TorchServe only seems to allow localhost:8080/predictions/<model_name>. I can’t find any documentation or examples of this anywhere. You can specify inference_address in config.properties, but that only lets you change the bind address and port, not the URL path. Is this possible, or will I have to put something like Flask in front of it? Thank you
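
For reference, here’s the kind of config.properties I’ve been trying; as far as I can tell, inference_address only controls the scheme, host, and port (the values here are just illustrative):

```properties
# config.properties: inference_address covers scheme, host, and port,
# but I can't find any key that would change the /predictions path
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
```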
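
And if there’s no native option, this is a rough sketch of the Flask fallback I have in mind: a thin proxy that owns /invocations on port 8080 and forwards requests to TorchServe running on a different port. The model name my_model and port 9080 are placeholders, not anything TorchServe requires:

```python
# Sketch of a thin proxy: expose POST /invocations on port 8080 and forward
# the raw request to TorchServe's default predictions endpoint. Assumes
# TorchServe was moved off 8080 (inference_address=http://127.0.0.1:9080)
# so the proxy can bind to it; "my_model" is a placeholder model name.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

TORCHSERVE_URL = "http://127.0.0.1:9080/predictions/my_model"

@app.route("/invocations", methods=["POST"])
def invocations():
    # Pass the request body and content type through unchanged.
    upstream = requests.post(
        TORCHSERVE_URL,
        data=request.get_data(),
        headers={"Content-Type": request.content_type or "application/octet-stream"},
    )
    # Relay TorchServe's response back to the caller.
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

It works, but running a second server just to rename one route feels heavy, so I’m hoping I’ve missed a config option somewhere.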