Equivalent batch size when using DataParallel or DistributedDataParallel

Hi! If I use, for example, a batch size of 25 on a single GPU, and then switch to DataParallel or DistributedDataParallel on 2 GPUs, should I just multiply my batch size by 2 (i.e., use 50) to get an equivalent batch size?

If so, does this still apply when I configure 2 GPUs even though a single GPU would have been enough (i.e., the batch already fit in one GPU's memory)?
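For context, here is a minimal sketch of the two setups I am comparing; the toy dataset, model, and variable names are just placeholders, and the comments reflect my current understanding of how each wrapper handles the batch:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset and model; the numbers just mirror the 25-vs-50 question.
dataset = TensorDataset(torch.randn(1000, 10), torch.randn(1000, 1))
model = nn.Linear(10, 1)

# Single GPU: 25 samples per optimizer step.
single_gpu_loader = DataLoader(dataset, batch_size=25)

# DataParallel: one process; each batch from the DataLoader is split
# across the GPUs, so batch_size=50 on 2 GPUs means ~25 samples per GPU
# per forward pass, but still 50 samples per optimizer step.
dp_loader = DataLoader(dataset, batch_size=50)
if torch.cuda.device_count() >= 2:
    dp_model = nn.DataParallel(model.cuda(), device_ids=[0, 1])

# DistributedDataParallel: one process per GPU, each with its own
# DataLoader, so batch_size=25 here would be per GPU and the effective
# global batch 2 * 25 = 50. This needs init_process_group and a
# DistributedSampler, omitted here, so it is left as comments:
# from torch.utils.data.distributed import DistributedSampler
# ddp_loader = DataLoader(dataset, batch_size=25,
#                         sampler=DistributedSampler(dataset))
```

Is that the right way to think about it, or am I mixing up per-GPU and global batch sizes somewhere?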

Thanks in advance!