RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor

I am using the pretrained DenseNet121 from torchvision inside my custom nn.Module. However, the error in the title occurs even though I call .cuda(). I also tried calling .cuda() within the __init__ function of CustomNet, but it still gives the same error. Please assume I remove the last layer and replace the classifier of the DenseNet with an identity layer before adding the fully-connected layers in the custom net. Thank you very much for your help in advance.

import torch.nn as nn
import torchvision

class DenseNet121(nn.Module):
    """Model modified.
    The architecture of our model is the same as standard DenseNet121
    except the classifier layer which has an additional sigmoid function.
    """
    def __init__(self, out_size):
        super(DenseNet121, self).__init__()
        self.densenet121 = torchvision.models.densenet121(pretrained=True)
        num_ftrs = self.densenet121.classifier.in_features
        self.in_features = num_ftrs
        self.densenet121.classifier = nn.Sequential(
            nn.Linear(num_ftrs, out_size),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.densenet121(x)
        return x

class CustomNet(nn.Module):
    def __init__(self, dense_net, num_ftrs):
        super(CustomNet, self).__init__()
        self.dense_net = dense_net
        # Fully connected layer 1
        self.fc1 = nn.Linear(num_ftrs, 512)
        self.fc2 = nn.Linear(512, 512)
        self.output = nn.Linear(512, 9)
        self.leakyRelu = nn.LeakyReLU(0.01)
        
    def forward(self, x):
        features = self.dense_net(x)
        fc1_output = self.leakyRelu(self.fc1(features))
        fc2_output = self.leakyRelu(self.fc2(fc1_output))
        #Softmax?
        output = self.output(fc2_output)
        return output

Edit:
Apparently the error is within the conv layers of densenet121; I’m not sure how to solve this.

The code looks basically alright.
One small side note: I’m wondering why you are using a sigmoid in your DenseNet implementation, but that’s unrelated to your problem.

Could you post the code where you initialize your model and feed the data?

Thank you for the reply. This is for an implementation of CheXNet on X-ray images, which uses a sigmoid. As for the initialization: I simply instantiate the model and call .cuda().

Edit: I almost forgot, I do remove the last layer of densenet121 and add a layer that doesn’t change the input:

class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()
        
    def forward(self, x):
        return x

dense_net = DenseNet121(12)
# Replace the classifier inside the wrapped densenet121 (not on the outer wrapper),
# so that dense_net(x) returns the raw 1024-dim features.
dense_net.densenet121.classifier = nn.Sequential(Identity())
custom_net = CustomNet(dense_net, dense_net.in_features)
custom_net = custom_net.cuda(0)

I even checked the model parameters with is_cuda and they all return True… I also checked the type of the input, which is torch.cuda.FloatTensor. The error occurs within the conv layers of self.features(x) in densenet121. Thanks!
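Roughly the checks I ran (a sketch; input_batch is a placeholder name for my actual input tensor):

print(all(p.is_cuda for p in custom_net.parameters()))  # prints True
print(input_batch.type())  # prints 'torch.cuda.FloatTensor'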

Did you also check the parameters of custom_net.dense_net.densenet121?
I’m still unsure where this error might come from.

Yes… I tried the line below and it still returns True…

next(custom_net.dense_net.densenet121.parameters()).is_cuda

OK, thanks. Could you try to narrow down the location of the error?
You said the error is thrown in some conv layers of the densenet.
Could you check if all conv layers are on the GPU?
Did you manipulate the conv layers somehow or change them in another way?

Thank you for the prompt reply. I have copied torchvision’s code for densenet121 to aid our discussion. I found that the error occurred at self.features(x); within it, the error is at a ‘conv2’ of a _DenseLayer. I did not change anything within densenet121, as I intend to use it as a pretrained feature extractor. May I ask how I can check all the layers in this case? Thanks a lot!

import re
from collections import OrderedDict

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo

# Checkpoint URL as defined in torchvision.models.densenet
model_urls = {
    'densenet121': 'https://download.pytorch.org/models/densenet121-a639ec97.pth',
}

def densenet121(pretrained=False, **kwargs):
    r"""Densenet-121 model from
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 24, 16),
                     **kwargs)
    if pretrained:
        # '.'s are no longer allowed in module names, but previous _DenseLayer
        # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
        # They are also in the checkpoints in model_urls. This pattern is used
        # to find such keys.
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        state_dict = model_zoo.load_url(model_urls['densenet121'])
        for key in list(state_dict.keys()):
            res = pattern.match(key)
            if res:
                new_key = res.group(1) + res.group(2)
                state_dict[new_key] = state_dict[key]
                del state_dict[key]
        model.load_state_dict(state_dict)
    return model

class _DenseLayer(nn.Sequential):
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):
        super(_DenseLayer, self).__init__()
        self.add_module('norm1', nn.BatchNorm2d(num_input_features)),
        self.add_module('relu1', nn.ReLU(inplace=True)),
        self.add_module('conv1', nn.Conv2d(num_input_features, bn_size *
                        growth_rate, kernel_size=1, stride=1, bias=False)),
        self.add_module('norm2', nn.BatchNorm2d(bn_size * growth_rate)),
        self.add_module('relu2', nn.ReLU(inplace=True)),
        self.add_module('conv2', nn.Conv2d(bn_size * growth_rate, growth_rate,
                        kernel_size=3, stride=1, padding=1, bias=False)),
        self.drop_rate = drop_rate

    def forward(self, x):
        new_features = super(_DenseLayer, self).forward(x)
        if self.drop_rate > 0:
            new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
        return torch.cat([x, new_features], 1)

class DenseNet(nn.Module):
    r"""Densenet-BC model class, based on
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        growth_rate (int) - how many filters to add each layer (`k` in paper)
        block_config (list of 4 ints) - how many layers in each pooling block
        num_init_features (int) - the number of filters to learn in the first convolution layer
        bn_size (int) - multiplicative factor for number of bottle neck layers
          (i.e. bn_size * k features in the bottleneck layer)
        drop_rate (float) - dropout rate after each dense layer
        num_classes (int) - number of classification classes
    """

    def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16),
                 num_init_features=64, bn_size=4, drop_rate=0, num_classes=1000):

        super(DenseNet, self).__init__()

        # First convolution
        self.features = nn.Sequential(OrderedDict([
            ('conv0', nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
            ('norm0', nn.BatchNorm2d(num_init_features)),
            ('relu0', nn.ReLU(inplace=True)),
            ('pool0', nn.MaxPool2d(kernel_size=3, stride=2, padding=1)),
        ]))

        # Each denseblock
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = _DenseBlock(num_layers=num_layers, num_input_features=num_features,
                                bn_size=bn_size, growth_rate=growth_rate, drop_rate=drop_rate)
            self.features.add_module('denseblock%d' % (i + 1), block)
            num_features = num_features + num_layers * growth_rate
            if i != len(block_config) - 1:
                trans = _Transition(num_input_features=num_features, num_output_features=num_features // 2)
                self.features.add_module('transition%d' % (i + 1), trans)
                num_features = num_features // 2

        # Final batch norm
        self.features.add_module('norm5', nn.BatchNorm2d(num_features))

        # Linear layer
        self.classifier = nn.Linear(num_features, num_classes)

        # Official init from torch repo.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        features = self.features(x)
        out = F.relu(features, inplace=True)
        out = F.avg_pool2d(out, kernel_size=7, stride=1).view(features.size(0), -1)
        out = self.classifier(out)
        return out

I can’t find the error. I used your code to create CustomNet, passed an instance of your DenseNet121 to it, and it works without an error.

Could you post a small executable code snippet to reproduce the error?
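
To check all the layers, something like this should work (a small sketch, assuming the custom_net instance from above):

for name, module in custom_net.named_modules():
    if isinstance(module, nn.Conv2d):
        print(name, next(module.parameters()).is_cuda)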

Thanks for the help. It turns out I didn’t call tensor.to()…

Good you figured it out! Where was the missing to() call? I haven’t found it so far.

For some reason, it works when I add the to() call on the input data during the forward pass… I’m a bit confused about why it works.

Did you forget to assign the tensor back?

tensor = tensor.to('cuda')
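
nn.Module.to() moves the parameters and buffers in-place, but torch.Tensor.to() is out-of-place: it returns a new tensor and leaves the original unchanged, so the result has to be assigned back. A minimal sketch:

import torch
import torch.nn as nn

model = nn.Linear(1, 1)
model.to('cuda')       # fine: nn.Module.to() moves the parameters in-place

x = torch.randn(1, 1)
x.to('cuda')           # no effect: Tensor.to() returns a new tensor
print(x.is_cuda)       # False
x = x.to('cuda')       # correct: assign the returned tensor back
print(x.is_cuda)       # True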

Yes, exactly… it was a stupid mistake.