Changing model structure in nn.DataParallel


I want to change the layers of a previously trained model. For example, I have a MobileNet trained on my data, and now I want to remove the FC layers. For this purpose, I replace the FC layer with a class called Identity, whose definition you can find below. (I took this suggestion from one of the admins here, but I do not remember the post.)

import torch.nn as nn

class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        # No-op: pass the input through unchanged
        return x
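(As a side note, recent PyTorch releases ship an equivalent built-in, torch.nn.Identity, which behaves the same way as the class above:)

```python
import torch
import torch.nn as nn

# nn.Identity is a no-op module that returns its input unchanged
layer = nn.Identity()
x = torch.randn(3, 4)
print(torch.equal(layer(x), x))  # True
```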

The procedure is this:

  1. Define a MobileNet using torchvision.models
  2. Wrap the model in nn.DataParallel
  3. Load the previously trained weights
  4. Replace the FC layers with the Identity class

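For concreteness, here is a minimal CPU-runnable sketch of these steps. It uses a tiny stand-in module in place of torchvision's MobileNetV2 and skips the checkpoint load; the name TinyNet and the layer sizes are just for illustration. Assigning the Identity in step 4 on the wrapped model reproduces the print output with two classifiers shown below:

```python
import torch
import torch.nn as nn

class Identity(nn.Module):
    def forward(self, x):
        return x

# Tiny stand-in for MobileNetV2: features + classifier head (step 1)
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(8, 1280)
        self.classifier = nn.Sequential(
            nn.Linear(1280, 512), nn.ReLU(),
            nn.Dropout(p=0.2), nn.Linear(512, 7))

    def forward(self, x):
        return self.classifier(self.features(x))

model = nn.DataParallel(TinyNet())  # step 2
# step 3 would be: model.load_state_dict(torch.load(checkpoint_path))

model.classifier = Identity()       # step 4, assigned on the wrapper

# print(model) now lists both the original module.classifier (Sequential)
# and the newly attached classifier (Identity)
print(type(model.module.classifier).__name__)  # Sequential
print(type(model.classifier).__name__)         # Identity
```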
Now, the problem is that the classifier is not fully replaced, and I think it is because there are copies of the model on the other GPUs as well.

Is there any solution to change all the copies? This is the result of print(model):

(module): MobileNetV2(
  (features): Sequential(
    ...
  )
  (classifier): Sequential(
    (0): Linear(in_features=1280, out_features=512, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.2, inplace=False)
    (3): Linear(in_features=512, out_features=7, bias=True)
  )
(classifier): Identity()

You can see that there are two classifiers (FC layers) because I use 2 GPUs.

One more important thing:

When I set the number of GPUs to 1 (so I do not have the above problem), I get this error:

OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option.

Break your model out of DataParallel, replace the layers, and then wrap it with DataParallel again. I’m assuming you won’t need to do this often enough to cause any severe overhead.

I’m unsure about that warning; I’m assuming it is a separate problem altogether.


Thanks for your answer.

Actually, I had thought about your idea, but I wanted to use the same procedure I had followed in my previous code.

BTW, is there any specific code to break out of DataParallel? I was not able to find any.

You can get it out like so: model = model.module, where model was initially wrapped by DataParallel.

It’s kind of annoying because, depending on your code, you might have to surround it with if statements (e.g., if using a single GPU, don’t do this).


Thanks for your responses.

I will put the code here; maybe it is useful for others:

    # Unwrap from DataParallel
    if torch.cuda.device_count() > 1:
        model = model.module

    # Replace the FC layers
    model.classifier = Identity()

    # Wrap again with DataParallel
    if torch.cuda.device_count() > 1:
        print("Wrap again with DataParallel on", torch.cuda.device_count(), "GPUs")
        model = nn.DataParallel(model)