Using multiple models in a single microservice

I am building a service, plus a separate microservice that hosts all of its ML models.

  1. Are there downsides to putting every model in one microservice, and what is the most reasonable way to structure it?
  2. If I had to split the models into two microservices running on a single GPU (especially if one of them is a Docker container and the other a systemd daemon), would that eventually backfire, and how?
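For context, here is a minimal sketch of the single-microservice layout I have in mind: every model is loaded once at startup into a registry, and requests are routed by model name. The class and model names are hypothetical, and the models are stubbed out as plain callables in place of real GPU-backed models.

```python
# Sketch of one service hosting several models behind a single dispatch
# layer. Real models (e.g. torch.nn.Module instances on a shared GPU)
# are replaced with plain callables; all names here are hypothetical.

from typing import Any, Callable, Dict


class ModelRegistry:
    """Loads every model once at startup and routes requests by name."""

    def __init__(self) -> None:
        self._models: Dict[str, Callable[[Any], Any]] = {}

    def register(self, name: str, model: Callable[[Any], Any]) -> None:
        # In a real service this is where the model would be loaded
        # onto the GPU, once, at process start.
        self._models[name] = model

    def predict(self, name: str, payload: Any) -> Any:
        if name not in self._models:
            raise KeyError(f"unknown model: {name}")
        return self._models[name](payload)


registry = ModelRegistry()
registry.register("sentiment", lambda text: "positive" if "good" in text else "negative")
registry.register("length", lambda text: len(text))
```

An HTTP layer (e.g. one FastAPI endpoint per model, or a single endpoint taking the model name as a parameter) would sit on top of this registry; the alternative I am asking about would split the registry across two processes competing for the same GPU.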