DDP with Gradient checkpointing

Since my method is an autoregressive algorithm, it builds up a huge gradient tape, so I am trying to do something like this:


import torch.utils.checkpoint

# checkpoint each autoregressive step so its activations are recomputed in backward
for i in range(matrix.shape[0]):
    output = torch.utils.checkpoint.checkpoint(NNModel, matrix[i])

loss = -output.mean()

where NNModel is a torch.nn.Module.

It works fine on a single GPU, but under DDP it throws this error:

RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 30 with name module.model.decoder.decoder_network.layers.1.weight has been marked as ready twice. This means that multiple autograd engine  hooks have fired for this particular parameter during this iteration.

I am running it with find_unused_parameters=False.
Is there any workaround for this?
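
For reference, the model is wrapped in DDP roughly like this (a minimal sketch; the torchrun/NCCL setup and the no-argument NNModel constructor are just placeholders, not my exact code):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# assumes launch via torchrun, one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])
dist.init_process_group(backend="nccl")
torch.cuda.set_device(local_rank)

model = NNModel().to(local_rank)
ddp_model = DDP(model, device_ids=[local_rank], find_unused_parameters=False)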

@shivammehta007 Can you try with find_unused_parameters=True? Also, can you provide a self-contained repro of the issue?

Another option is to use _set_static_graph() (pytorch/distributed.py at master · pytorch/pytorch · GitHub), if the set of parameters used in each iteration of your model is always the same.
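
Roughly like this (a minimal, untested sketch; model and local_rank are placeholders for however you construct and place your module):

from torch.nn.parallel import DistributedDataParallel as DDP

# Option 1: let DDP search the autograd graph each iteration for
# parameters that did not receive a gradient.
ddp_model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)

# Option 2: keep find_unused_parameters=False and declare the graph static,
# so DDP tolerates parameters whose autograd hooks fire more than once
# (e.g. because of reentrant checkpointing). Note this is a private API.
ddp_model = DDP(model, device_ids=[local_rank], find_unused_parameters=False)
ddp_model._set_static_graph()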