Distributed Data Parallel with two different model GPUs possible?

Hello, I have a machine with a 2080 Ti and a 1080 Ti. Is it possible to do distributed data parallel across two different types of GPUs?

Sure, PyTorch doesn't care about that. But please take care with the GPU memory allocation: the per-process batch size has to fit on whichever card has less free memory, and since DDP synchronizes gradients every backward pass, the faster 2080 Ti will end up waiting for the slower 1080 Ti.
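Here is a minimal sketch of what that can look like, assuming a single machine where the two cards show up as CUDA devices 0 and 1; the model, batch size, and port are placeholders, not anything from this thread:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank, world_size):
    # One process per GPU; rank 0 and rank 1 each pin to their own device.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"  # any free port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = nn.Linear(1024, 10).cuda(rank)      # placeholder model
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    # Keep the per-GPU batch small enough for the card with less free memory.
    inputs = torch.randn(32, 1024, device=rank)
    loss = ddp_model(inputs).sum()
    loss.backward()   # gradients are all-reduced across both GPUs here
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```

Both processes run the same per-step batch, so overall throughput is bounded by the slower card.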
