Add custom regularization in distributed mode

Hi!

I’m trying to add a custom regularization term to the loss function like this:

             regu = torch.tensor([0.]).to(torch.device('cuda'))
             for name, param in model.named_parameters():
                 if 'alpha' in name:
                     print(name)
                     regu += param**2.
             loss = criterion(outputs, targets) + regu

It worked well when I used a single GPU.
But after I changed the code to test it in distributed mode, it gave me an error.

             regu = torch.tensor([0.]).to(torch.device('cuda'))
             if args.distributed:
                 for name, param in model.module.named_parameters():
                     if 'alpha' in name:
                         print(name)
                         regu += param**2.
                 loss = criterion(outputs, targets) + regu

Error Message:
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 316 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 605186) of binary: /home/sis/vautoformer/bin/python

I can’t understand why it doesn’t work.
Could you give me any advice about this?

I guess you might be using the parameters to compute the regu loss outside of the forward, as indicated by point 1) in the error message?
If so, would it be possible to move this calculation into the forward?
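Something along these lines might work. This is just a rough sketch (the wrapper name ModelWithRegu is only for illustration, and I’m assuming the alpha parameters are ordinary nn.Parameters of your model): the regularization term is computed inside forward and returned as part of the output, so DDP’s hooks see each parameter exactly once per backward pass.

    import torch
    import torch.nn as nn

    class ModelWithRegu(nn.Module):
        """Wraps the base model and computes the alpha regularization inside forward."""
        def __init__(self, model):
            super().__init__()
            self.model = model

        def forward(self, x):
            outputs = self.model(x)
            # sum of squared 'alpha' parameters, computed inside the DDP forward
            regu = sum((param ** 2).sum()
                       for name, param in self.model.named_parameters()
                       if 'alpha' in name)
            return outputs, regu

    # usage sketch (names like local_rank are placeholders):
    # ddp_model = nn.parallel.DistributedDataParallel(ModelWithRegu(model), device_ids=[local_rank])
    # outputs, regu = ddp_model(data)
    # loss = criterion(outputs, targets) + regu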

When I comment out + regu in loss = criterion(outputs, targets) + regu, everything works fine.
So the method I wrote above seems to be the problem.

Oh, I misunderstood what you said.
I tested the modified code below:

    class custom_forward(torch.nn.Module):
        def __init__(self, model):
            super().__init__()
            self.model = model

        def forward(self, x):
            regu = []
            for name, param in self.model.named_parameters():
                if 'alpha' in name:
                    regu.append(param**2.)
            return x + torch.stack(regu, dim=0).sum(dim=0)

    for _ in range(epoch):
        outputs = model(data)
        loss = custom_forward(model)(criterion(outputs, targets))

But I got the same error message:

Traceback (most recent call last):
  File "supernet_train.py", line 407, in <module>
    main(args)
  File "supernet_train.py", line 356, in main
    train_stats = train_one_epoch(
  File "/home/sis/Cream/0724/supernet_engine.py", line 151, in train_one_epoch
    loss_scaler(loss, optimizer, clip_grad=max_norm,
  File "/home/sis/vautoformer/lib/python3.8/site-packages/timm/utils/cuda.py", line 43, in __call__
    self._scaler.scale(loss).backward(create_graph=create_graph)
  File "/home/sis/vautoformer/lib/python3.8/site-packages/torch/_tensor.py", line 396, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/sis/vautoformer/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 316 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1021345) of binary: /home/sis/vautoformer/bin/python

The loss calculation using the parameters is still outside of the model’s forward method, so the same error would be expected.
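For DDP to track the alpha parameters correctly, the regularization has to be produced inside the forward of the module that DistributedDataParallel actually wraps and returned through its output, not by a separate module applied to the loss afterwards. Here is a minimal sketch of the training-loop side, assuming a wrapper like the one suggested above whose forward returns (outputs, regu); the names ddp_model and local_rank are placeholders:

    # assumed setup (illustrative names):
    # ddp_model = torch.nn.parallel.DistributedDataParallel(
    #     ModelWithRegu(model), device_ids=[local_rank])

    for _ in range(epoch):
        outputs, regu = ddp_model(data)            # regu is built inside the DDP forward
        loss = criterion(outputs, targets) + regu  # single loss, single backward
        optimizer.zero_grad()
        loss.backward()                            # DDP reduces each gradient exactly once
        optimizer.step()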

I couldn’t find a way to move the custom regularization term into the forward as you suggested.
So I worked around the problem with 2-stage training like this:

    for _ in range(epoch):
        # stage 1: task loss only
        outputs = model(data)
        loss = criterion(outputs, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # stage 2: regularization on the alpha parameters only
        regu = []
        for name, param in model.named_parameters():
            if 'alpha' in name:
                regu.append(param**2.)
        loss2 = torch.stack(regu, dim=0).mean()
        optimizer2.zero_grad()
        loss2.backward()
        optimizer2.step()

And my code seems to be working.
But I’m not sure whether this approach is fine.
If there’s a better way, please let me know.
Thank you for your help!