Different behavior between single GPU and multi-GPU when using @property

I switch between single and multiple GPUs via

    os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # or '0,1,2,3' for multi-GPU
    model = custom_model()
    model = torch.nn.DataParallel(model).cuda()

and my custom model exposes an intermediate result through a @property:

    class xxx(nn.Module):
        def __init__(self):
            super(xxx, self).__init__()
            self.temp = None

        def forward(self, x):
            result = some_layer_calculate(x)
            self.temp = result   # cache the intermediate result
            return result

        @property
        def get_temp(self):
            loss_result = some_steps(self.temp)
            return loss_result

I want to use the value returned by get_temp to compute an extra loss term, so I collect the relevant submodules:

        layer_list = []
        for m in model.modules():
            if m.__str__().startswith('xxx'):
                layer_list.append(m)

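As an aside, matching on the repr string can be fragile, since repr output may change between versions; an isinstance check does the same job. A torch-free sketch of the collection logic (here Module is a minimal stand-in for nn.Module, not the real class):

```python
class Module:  # minimal stand-in for nn.Module
    def modules(self):
        yield self
        for child in getattr(self, "_children", []):
            yield from child.modules()

class xxx(Module):
    pass

class Other(Module):
    pass

root = Module()
root._children = [xxx(), Other(), xxx()]

# isinstance is more robust than m.__str__().startswith('xxx')
layer_list = [m for m in root.modules() if isinstance(m, xxx)]
print(len(layer_list))  # 2
```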
Then, in every training step, I accumulate the values:

        for m in layer_list:
            loss += m.get_temp

This seems to work well on a single GPU, but after I switch to multiple GPUs, I get this error:

        'xxx' object has no attribute 'get_temp'
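For what it's worth, that exact message can be reproduced without torch: nn.Module defines __getattr__, and when a @property getter raises AttributeError internally (e.g. because self.temp is still None on that copy of the module), Python falls back to __getattr__, so the property itself looks missing. A minimal sketch (the __getattr__ body only mimics nn.Module's message, it is not the real implementation):

```python
class xxx:
    def __init__(self):
        self.temp = None

    def __getattr__(self, name):
        # nn.Module defines __getattr__ similarly, so any AttributeError
        # escaping a property getter falls through to this message
        raise AttributeError(
            f"'{type(self).__name__}' object has no attribute '{name}'"
        )

    @property
    def get_temp(self):
        return self.temp.sum()  # raises AttributeError while temp is None

m = xxx()
try:
    m.get_temp
except AttributeError as e:
    print(e)  # 'xxx' object has no attribute 'get_temp'
```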

Does anyone know how to fix this error? Thank you :)