I am using `torch.nn.parallel.DistributedDataParallel` to parallelize my training loop in a single-node, multi-GPU setting. When I simply wrap the model with `torch.nn.parallel.DistributedDataParallel`, I get the following error:
```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 2: 0 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
```
However, once I set `find_unused_parameters=True` as suggested in the error message, the error goes away. This concerns me a bit: if I'm understanding correctly, `find_unused_parameters=True` is only supposed to flag parameters that were not used in calculating the final loss (and I don't think my model has any), not to "fix" anything. Does anyone have any idea why the wrapper behaves this way?
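
For reference, here is a minimal sketch of how I'm wrapping the model. My actual model and data are more complex (a plain `Linear` won't reproduce the error); this only shows the wiring, with placeholder names throughout:

```python
# Launched with: torchrun --nproc_per_node=<num_gpus> train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # One process per GPU; torchrun provides LOCAL_RANK/RANK/WORLD_SIZE.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)  # placeholder model

    # Plain wrapping -- with my real model this raises the RuntimeError above:
    # model = DDP(model, device_ids=[local_rank])

    # With the flag suggested by the error message, the error goes away:
    model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(10):  # placeholder training loop with random data
        inputs = torch.randn(32, 128, device=local_rank)
        targets = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```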