How to pass through just one layer of a pretrained model?

Hi,

I’m stuck on a bug. I want to take a pretrained VGG16 and connect each convolution layer to an extra convolution layer, but my code has a bug that I don’t know how to fix.

Here is the relevant code.

This line is defined in __init__().
self.conv_down = nn.Conv2d(input_channel, 21, kernel_size=1)

The following lines are defined in forward()

for layer in self.stage:
    if isinstance(layer, nn.Conv2d):
        x = nn.Sequential(layer)(x)    # This line should pass x through exactly one layer
        out = self.conv_down(x)
        out_for_conv.append(out)
    else:
        x = nn.Sequential(layer)(x)

self.stage is stage one, which I extracted from the pretrained VGG16. It looks like: [Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), ReLU(inplace), Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), ReLU(inplace), MaxPool2d(size=(2, 2), stride=(2, 2), dilation=(1, 1))]

Is that line the right way to pass x through exactly one layer? Or is there a more accurate way to do this?

Caffe has parameters like lr_mult and decay_mult. Does PyTorch have an equivalent?
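
For reference, the closest PyTorch analogue is per-parameter optimizer groups, where each group can set its own lr and weight_decay; a minimal sketch (the features/classifier split and the specific values are just illustrative choices, not from this thread):

import torch
import torchvision.models as models

model = models.vgg16(pretrained=True)

# Per-parameter groups play the role of Caffe's lr_mult / decay_mult:
# each group can override the optimizer-wide lr and weight_decay.
optimizer = torch.optim.SGD(
    [
        {'params': model.features.parameters(), 'lr': 1e-4, 'weight_decay': 5e-4},
        {'params': model.classifier.parameters(), 'lr': 1e-3, 'weight_decay': 0.0},
    ],
    lr=1e-3, momentum=0.9)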

What do you mean by "pass one layer"? Do you mean skip one layer?

x = nn.Sequential(layer)(x) 
# adds nothing here; it is exactly the same as x = layer(x)
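
A quick way to see this (a minimal sketch with an arbitrary conv layer, not the code above):

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 16, 16)

# Wrapping a single module in nn.Sequential adds no computation:
# both expressions run the very same layer object.
assert torch.allclose(nn.Sequential(conv)(x), conv(x))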

@smth I see, that makes sense. However, I then ran into another error: RuntimeError: tensors are on different GPUs

class _make_layer(nn.Module):
    def __init__(self, stage, input_channel):
        super(_make_layer, self).__init__()
        self.conv_down = nn.Conv2d(input_channel, 21, kernel_size=1)
        self.conv_add = nn.Conv2d(21, 1, kernel_size=1)
        self.stage = stage

    def forward(self, x):
        out_for_conv = []
        for layer in self.stage:
            if isinstance(layer, nn.Conv2d):
                x = layer(x)    # This line causes the error
                out = self.conv_down(x)
                out_for_conv.append(out)
            else:
                x = layer(x)
        out = out_for_conv[0]
        for idx in range(1, len(out_for_conv)):
            out += out_for_conv[idx]
        out = self.conv_add(out)
        return x, out

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.vgg16_features = list(models.vgg16(pretrained=True).features.children())
        self.feature_stage1 = self.vgg16_features[0:5]

    def forward(self, x):
        out = []
        # stage1
        x1, out_stage1 = _make_layer(self.feature_stage1, 64)(x)
        out.append(out_stage1)

This is my code. I am only using one GPU, so why does it say I have used more than one?
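
A likely cause: the VGG layers are kept in a plain Python list, and a fresh _make_layer module is constructed inside Model.forward, so neither the pretrained layers nor conv_down are registered as submodules and model.cuda() never moves their parameters; the first conv then runs with CPU weights on a GPU input. A minimal sketch of a fix, keeping the same names and stage slicing (not a confirmed resolution from this thread):

import torch.nn as nn
import torchvision.models as models

class _make_layer(nn.Module):
    def __init__(self, stage, input_channel):
        super(_make_layer, self).__init__()
        self.conv_down = nn.Conv2d(input_channel, 21, kernel_size=1)
        self.conv_add = nn.Conv2d(21, 1, kernel_size=1)
        # nn.ModuleList registers the pretrained layers as submodules,
        # so model.cuda() moves their parameters along with everything else
        self.stage = nn.ModuleList(stage)

    def forward(self, x):
        out_for_conv = []
        for layer in self.stage:
            x = layer(x)
            if isinstance(layer, nn.Conv2d):
                out_for_conv.append(self.conv_down(x))
        out = sum(out_for_conv)
        return x, self.conv_add(out)

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        features = list(models.vgg16(pretrained=True).features.children())
        # build the stage module once in __init__, not on every forward call,
        # so it is registered and moved to the GPU with the rest of the model
        self.stage1 = _make_layer(features[0:5], 64)

    def forward(self, x):
        x1, out_stage1 = self.stage1(x)
        return x1, out_stage1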