Hi,
I’m stuck on a bug. I want to take a pretrained VGG16 and attach an extra convolution layer to the output of each convolution layer, but my code has a bug and I don’t know how to fix it.
Here is the relevant code.
This line is defined in __init__():
self.conv_down = nn.Conv2d(input_channel, 21, kernel_size=1)
The following lines are defined in forward():
for layer in self.stage:
    x = layer(x)  # this line should pass x through exactly one layer
    if isinstance(layer, nn.Conv2d):
        out = self.conv_down(x)
        out_for_conv.append(out)
self.stage is the first stage I extracted from the pretrained VGG16. It looks like this: [Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), ReLU (inplace), Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), ReLU (inplace), MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))]
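To make the question concrete, here is a self-contained sketch of what I am trying to do. The stage below mirrors the printed VGG16 stage but with random weights instead of the pretrained ones, and the 1x1 conv_down branch and input shape are just placeholders for illustration:

```python
import torch
import torch.nn as nn

# Mirrors the first VGG16 stage printed above (random weights for the sketch).
stage = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
)
conv_down = nn.Conv2d(64, 21, kernel_size=1)  # placeholder 1x1 side branch

x = torch.randn(1, 3, 32, 32)  # placeholder input
out_for_conv = []
for layer in stage:
    x = layer(x)  # calling the module directly runs x through exactly one layer
    if isinstance(layer, nn.Conv2d):
        # attach a side output after every convolution layer
        out_for_conv.append(conv_down(x))

print(len(out_for_conv))      # one side output per Conv2d in the stage
print(out_for_conv[0].shape)  # spatial size is preserved by padding=1
```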
Is that the right way to pass x through exactly one layer, or is there a more idiomatic way to do this?
Caffe has parameters like lr_mult and decay_mult. Does PyTorch have an equivalent?
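The closest PyTorch mechanism I have found is per-parameter groups passed to the optimizer, where each group can carry its own lr and weight_decay. A minimal sketch (the two-layer model and the specific values are just for illustration):

```python
import torch
import torch.nn as nn

# Placeholder model: pretend model[0] is pretrained and model[1] is new.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.Conv2d(64, 21, kernel_size=1),
)

# Each dict is one parameter group with its own lr / weight_decay,
# playing the role of Caffe's lr_mult / decay_mult.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4, "weight_decay": 5e-4},
        {"params": model[1].parameters(), "lr": 1e-3, "weight_decay": 0.0},
    ],
    momentum=0.9,
)

for group in optimizer.param_groups:
    print(group["lr"], group["weight_decay"])
```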