Hi,
When using torch.nn.DataParallel(), we obtain batch data by iterating over a dataset, e.g. for batch in dataset. In this case, should the batch size be set to the total batch size across all cards, or the batch size of a single card?
Thank you!
The deprecated nn.DataParallel module will split the input batch and send each chunk to the corresponding GPU. The batch size is thus the "global" batch size.
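Here is a minimal sketch of this behavior, assuming a toy linear model and at least two visible GPUs (the model and dataset here are hypothetical, for illustration only):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy model and dataset for illustration
model = nn.Linear(10, 2)
dataset = TensorDataset(torch.randn(256, 10))

# The DataLoader batch_size is the GLOBAL batch size:
# with e.g. 4 GPUs, each device receives a chunk of 64/4 = 16 samples.
loader = DataLoader(dataset, batch_size=64)

# Replicates the model onto all visible GPUs
model = nn.DataParallel(model.cuda())

for (batch,) in loader:
    # The [64, 10] input is scattered along dim 0 across the GPUs;
    # outputs are gathered back onto the default device.
    out = model(batch.cuda())
    print(out.shape)  # torch.Size([64, 2])
```

So with this setup each individual card only ever sees batch_size / num_gpus samples per forward pass, while your loop deals in the global batch size.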