Sorry for the newbie question.
A 16GB VRAM GPU is expensive, so I’m considering buying two 8GB VRAM GPUs instead. Can two 8GB GPUs handle the image sizes and models that currently require a 16GB GPU? Or will I run out of memory even if I train with DDP?
Hard to tell without knowing anything about the use case, so try to lease a node with multiple GPUs, run your DDP code, and profile the memory usage.
@ptrblck Thank you for your reply. I haven’t purchased the GPUs yet, so it’s hard to verify this myself. In general, though: for simple CNN classification, is it possible to move image sizes that barely fit on a 16GB VRAM GPU onto two 8GB VRAM GPUs?
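One point worth keeping in mind while profiling: DDP is data parallelism, so each GPU holds a full replica of the model plus the activations for its own share of the batch. It can lower per-GPU memory only by shrinking the per-GPU batch size; a single image whose activations alone exceed 8GB will still OOM on an 8GB card. A rough back-of-envelope sketch (the image size, channel counts, and resolution-preserving layers below are illustrative assumptions, not your actual model):

```python
def conv_activation_bytes(h, w, channels_per_layer, batch=1, dtype_bytes=4):
    """Rough forward-activation footprint of a stack of conv layers
    that keep the spatial resolution unchanged (a simplifying assumption).
    Ignores weights, gradients, and optimizer state, so it is a lower bound."""
    total = 0
    for c in channels_per_layer:
        # one float32 feature map of shape (batch, c, h, w) per layer
        total += batch * c * h * w * dtype_bytes
    return total

# Hypothetical 4096x4096 image through a small CNN (64, 128, 256 channels):
bytes_needed = conv_activation_bytes(4096, 4096, [64, 128, 256])
print(f"{bytes_needed / 2**30:.1f} GiB of forward activations")  # 28.0 GiB
```

Even at batch size 1 the forward activations here dwarf 8GB, and DDP cannot split them across cards, so splitting a single large image would instead need model/pipeline parallelism, tiling the image, or activation checkpointing.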