Returning a dictionary from forward call breaks DDP

Hello,
My forward call returns a dictionary, out_dict, as follows:

out_dict = {'main_predict_op': main_differentiable_op,
            'secondary_predict_op': second_differentiable_op}

It seems DDP does not like this and throws the following error:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).

I have set find_unused_parameters=True. From what I understand, having the output tensors encapsulated in a dictionary is not something DDP handles well.

How would I go about this?

Answering my own question for anyone who faces this: the fact that a tensor is wrapped in a dictionary is not relevant. DDP requires that all of the forward outputs participate in the loss (i.e., in the autograd graph). Setting find_unused_parameters=True did nothing for me here.
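
For concreteness, here is a minimal sketch (toy two-head model, single-process gloo group, all names hypothetical) showing that a dict return value is fine with DDP as long as every returned tensor feeds into the loss:

import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

class TwoHeadModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.main_head = nn.Linear(8, 4)
        self.secondary_head = nn.Linear(8, 4)

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        return {'main_predict_op': self.main_head(h),
                'secondary_predict_op': self.secondary_head(h)}

os.environ.setdefault('MASTER_ADDR', '127.0.0.1')
os.environ.setdefault('MASTER_PORT', '29500')
dist.init_process_group('gloo', rank=0, world_size=1)  # single process just for illustration

model = DDP(TwoHeadModel())
criterion = nn.MSELoss()

x = torch.randn(16, 8)
out_dict = model(x)

# Every tensor returned from forward contributes to the loss, so every
# parameter receives a gradient and DDP's reducer does not complain.
loss = (criterion(out_dict['main_predict_op'], torch.randn(16, 4)) +
        criterion(out_dict['secondary_predict_op'], torch.randn(16, 4)))
loss.backward()

dist.destroy_process_group()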

Hi,

the find_unused_parameters option in DDP covers the case where certain parameters are not used in the forward pass at all, while every parameter that is used in the forward pass does get a gradient in the backward pass.
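
For reference, that flag is passed when wrapping the model (a sketch; model and local_rank are assumed to exist):

model = torch.nn.parallel.DistributedDataParallel(
    model,
    device_ids=[local_rank],
    find_unused_parameters=True,  # tolerates parameters that are skipped entirely in forward
)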

There is another case where all parameters are used in the forward pass but some do not get gradients in the backward pass, for example if you have:

a, b = forward()
loss = a.sum()
loss.backward()

It seems like this may be what is happening in your training, but I would need to see your training loop to verify this.
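
Continuing that pseudo-code, one common workaround (assuming b genuinely should not influence the loss) is to fold it in with a zero coefficient, so the parameters that produced it still receive (zero) gradients and DDP's reducer stays in sync:

a, b = forward()
loss = a.sum() + 0.0 * b.sum()  # b adds nothing numerically, but keeps gradients flowing to its parameters
loss.backward()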