RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

How should I fix this error? The model checkpoint was built with PyTorch Lightning and can be downloaded from https://github.com/ozanciga/self-supervised-histopathology/releases/tag/tenpercent

import torch
from torchvision import models


def get_model_ciga(path='_ckpt_epoch_9.ckpt'):
    """Model downloaded from: https://github.com/ozanciga/self-supervised-histopathology
    """

    def load_model_weights(model, weights):
        # Keep only the checkpoint entries whose names match the model's
        # own state dict, then load them on top of the default weights.
        model_dict = model.state_dict()
        weights = {k: v for k, v in weights.items() if k in model_dict}
        if not weights:
            print('No weight could be loaded..')
        model_dict.update(weights)
        model.load_state_dict(model_dict)
        return model

    model = models.resnet18(pretrained=False)
    state = torch.load(path, map_location='cuda:0')

    # Strip the PyTorch Lightning prefixes ('model.', 'resnet.') so the
    # keys match torchvision's ResNet-18 parameter names.
    state_dict = state['state_dict']
    for key in list(state_dict.keys()):
        state_dict[key.replace('model.', '').replace('resnet.', '')] = state_dict.pop(key)

    model = load_model_weights(model, state_dict)

    # The model is moved to the GPU here, so its weights are cuda tensors.
    return model.cuda()


model = get_model_ciga('tenpercent_resnet18.ckpt')
# Drop the final fully connected layer to use the network as a feature extractor.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])


The input is a PIL image:

type(rgb_img)
PIL.JpegImagePlugin.JpegImageFile




from torchvision import transforms

# Convert the PIL image to a float tensor and add a batch dimension.
pil_to_tensor = transforms.ToTensor()(rgb_img).unsqueeze_(0)
pil_to_tensor.shape
torch.Size([1, 3, 256, 256])

output = feature_extractor(pil_to_tensor)

The full error is:


---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [27], in <module>
----> 1 output = feature_extractor(pil_to_tensor)

File ~/research/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File ~/research/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/container.py:141, in Sequential.forward(self, input)
    139 def forward(self, input):
    140     for module in self:
--> 141         input = module(input)
    142     return input

File ~/research/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File ~/research/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/conv.py:446, in Conv2d.forward(self, input)
    445 def forward(self, input: Tensor) -> Tensor:
--> 446     return self._conv_forward(input, self.weight, self.bias)

File ~/research/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/conv.py:442, in Conv2d._conv_forward(self, input, weight, bias)
    438 if self.padding_mode != 'zeros':
    439     return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
    440                     weight, bias, self.stride,
    441                     _pair(0), self.dilation, self.groups)
--> 442 return F.conv2d(input, weight, bias, self.stride,
    443                 self.padding, self.dilation, self.groups)

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

Your model's weights live on the GPU because get_model_ciga returns model.cuda(), but the input tensor produced by ToTensor stays on the CPU. F.conv2d requires the input and the weights to be on the same device, which is exactly what the error message says. Move the input tensor to the GPU (or keep the model on the CPU) before calling the feature extractor.
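A minimal sketch of the fix, assuming the checkpoint loads as above; the device variable is introduced here for illustration and falls back to the CPU when no GPU is available:

import torch

# Pick a device once and move both the model and the input to it.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

feature_extractor = feature_extractor.to(device)
feature_extractor.eval()  # use inference behaviour for batch norm

pil_to_tensor = pil_to_tensor.to(device)

with torch.no_grad():  # no gradients needed for feature extraction
    output = feature_extractor(pil_to_tensor)

print(output.shape)  # torch.Size([1, 512, 1, 1]) for a ResNet-18 backbone

Note that .to(device) on a tensor returns a new tensor rather than modifying it in place, so the result has to be assigned back. The same pattern also works if you drop the .cuda() call in get_model_ciga and keep everything on the CPU.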
