How can I use AMD and NVIDIA cards in the same code to do distributed inference?

Hi, I'm writing this post because the PyTorch installation only lets you choose between CUDA and ROCm, not both. How can I use an AMD and an NVIDIA GPU simultaneously in the same code on the same computer? Is this feature already implemented? I want to experiment and play with distributed inference.
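To give an idea of what I'm after: the only workaround I can imagine is one process per GPU, each launched from its own Python environment (the CUDA wheel for the NVIDIA card, the ROCm wheel for the AMD card, which also exposes its device as "cuda"), joined via `torch.distributed` over the CPU-based `gloo` backend, since NCCL and RCCL can't talk to each other. This is a rough sketch, not tested on real mixed hardware; `choose_backend` is just my own helper, and I assume a launcher like `torchrun` sets `RANK`, `WORLD_SIZE`, `MASTER_ADDR`, and `MASTER_PORT`:

```python
import os


def choose_backend(device_types):
    # Both the CUDA build (NCCL) and the ROCm build (RCCL) expose their
    # collective library under the "nccl" backend name, but the two
    # libraries cannot communicate with each other, so a mixed
    # NVIDIA + AMD group has to fall back to "gloo".
    return "nccl" if len(set(device_types)) == 1 else "gloo"


def main():
    # Imported inside main() so each process uses the torch build
    # (CUDA or ROCm) installed in its own environment.
    import torch
    import torch.distributed as dist

    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # ROCm builds of PyTorch also use the "cuda" device string.
    device = torch.device("cuda", 0)
    x = torch.ones(4, device=device) * (rank + 1)

    # gloo collectives run on CPU tensors, so stage through host memory.
    buf = x.cpu()
    dist.all_reduce(buf)
    print(f"rank {rank}: {buf.tolist()}")

    dist.destroy_process_group()


if os.environ.get("RANK") is not None:
    main()
```

Each rank would then be started separately, e.g. the NVIDIA process from the CUDA environment and the AMD process from the ROCm environment, pointing at the same master address. Is something like this viable, or is there a better-supported path?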