How is it possible to move a model wrapped in DataParallel to CPU?

Hello everyone.
How can I convert a model that's been trained on multiple GPUs (wrapped in DataParallel) to CPU?
I tried model = model.module.to('cpu') without any luck.
How should I be going about this?
Notes:
pn1: I’m trying to save this as a JIT-traced model, which is why I’m trying to convert it to CPU beforehand.
pn2: I know I can simply use model.module in torch.jit.trace and then map the weights to CPU when loading the JIT model. However, I want to know why this is failing here!
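Since the question is about unwrapping DataParallel and tracing on CPU, here is a minimal CPU-only sketch of that pattern, with a toy nn.Linear standing in for the trained network (the real model and input shapes will differ):

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the trained network
model = nn.DataParallel(nn.Linear(4, 2))

# Unwrap the DataParallel wrapper and move the underlying module to CPU
model_cpu = model.module.to('cpu')
model_cpu.eval()

# Trace with a dummy input that lives on the same device as the model
dummy_input = torch.randn(1, 4)
model_traced = torch.jit.trace(model_cpu, dummy_input)

out = model_traced(dummy_input)
print(out.shape)  # torch.Size([1, 2])
```

This is the same sequence of calls as in the question; on its own it traces without errors, which suggests the failure comes from elsewhere in the surrounding script.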

What kind of error are you seeing with the posted line of code?

It gives me:

RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device cpu

Basically I did:

model.eval()
model_cpu = model.module.to('cpu')
model_traced = torch.jit.trace(model_cpu, dummy_input)

Are you using to(), cpu() or cuda() operations inside your model?
If not, could you post an executable code snippet to reproduce this issue?
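To illustrate why that question matters: a device call hard-coded inside forward() would pin tensors to a fixed GPU regardless of where the module itself was moved, which breaks CPU inference and tracing. A hypothetical example of the pitfall (the offending line is commented out so the snippet runs on CPU):

```python
import torch
import torch.nn as nn

class PinnedModel(nn.Module):
    """Hypothetical module demonstrating a hard-coded device call."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        # A line like the following would force x onto cuda:0 even after
        # model.to('cpu'), causing device-mismatch errors when tracing on CPU:
        # x = x.to('cuda:0')
        return self.fc(x)

model = PinnedModel().to('cpu')
out = model(torch.randn(1, 4))
print(out.device)  # cpu
```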

Yes, the model is moved to CUDA using .to(), and it is defined like this:

        model = nn.DataParallel(model)
        metric_fc = ArcMarginModel(args)
        metric_fc = nn.DataParallel(metric_fc)

        if args.optimizer == 'sgd':
            optimizer = InsightFaceOptimizer(
                torch.optim.SGD([{'params': model.parameters()}, {'params': metric_fc.parameters()}],
                                lr=args.lr, momentum=args.mom, weight_decay=args.weight_decay))
        else:
            optimizer = InsightFaceOptimizer(
                torch.optim.Adam([{'params': model.parameters()}, {'params': metric_fc.parameters()}],
                                 lr=args.lr, weight_decay=args.weight_decay))

    else:
        checkpoint = torch.load(checkpoint)
        start_epoch = checkpoint['epoch'] + 1
        epochs_since_improvement = checkpoint['epochs_since_improvement']
        model = checkpoint['model']
        metric_fc = checkpoint['metric_fc']
        optimizer = checkpoint['optimizer']

    logger = get_logger()

    # Move to GPU, if available
    model = model.to(device)
    metric_fc = metric_fc.to(device)

And by the way, this runs on PyTorch 1.5.1.

The code snippet is unfortunately not executable so that I cannot debug it. :confused:

Ah, I missed the executable part! This is taken from this repo.
I’ll try to see if I can create a minimal reproducible example.

1 Like

Creating an MRE indeed does wonders!
OK, it turns out there was no problem to begin with! The error was coming from another section of the script that expected a CUDA model while I was handing it a model on CPU, hence the error.
Silly me, I completely missed that!
Thanks a lot for your generous help @ptrblck, really appreciate it.
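For reference, one likely mechanism behind this: .to() moves parameters in place, so model.module.to('cpu') also leaves the DataParallel wrapper holding CPU parameters, and calling the wrapper afterwards on a GPU machine raises exactly the "module must have its parameters and buffers on device cuda:0" error. A toy illustration (hypothetical nn.Linear in place of the real model):

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)
wrapper = nn.DataParallel(net)

# .to('cpu') moves parameters in place, so the wrapper and the
# unwrapped module share the same (now CPU) parameters.
cpu_model = wrapper.module.to('cpu')

print(next(wrapper.parameters()).device)  # cpu
# On a machine with GPUs, calling wrapper(some_input) at this point
# would raise:
# RuntimeError: module must have its parameters and buffers on device
# cuda:0 (device_ids[0]) but found one of them on device cpu
```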