How to modify the input channels of a Resnet model

I’m new to PyTorch.
I want to feed a 4-channel tensor into a ResNet model, but the default input has 3 channels.
Could anyone show me how to make this modification, please?


Hi, if you’re still interested:

https://discuss.pytorch.org/t/transfer-learning-how-to-modify-the-first-conv2d-layer-of-alexnet-to-accomodate-for-9-channel-input/4063

I’m currently trying it for medical images that may have 18 slices, i.e. 18 channels of grayscale images.


I’m facing the same problem. Does it work?


I want a 2-channel ResNet. Is it possible with resnet18(), for instance, and how?

Have you done this? (How?)


So how did you do it in the end?

I’m not sure since I did it long ago, but I think I did something like:

model = torchvision.models.resnet18()
model.conv1 = nn.Conv2d(num_input_channel, 64, kernel_size=7, stride=2, padding=3, bias=False)

I hope it can help.


I did it this way – search for ‘resnet18’.

All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].

Here is a generic snippet to increase the input to 4 or more channels.
One key point is that the additional channel weights can be initialized with one of the original channels rather than being randomized.

import torch
import torch.nn as nn
from torchvision import models

new_in_channels = 4
model = models.resnet18(pretrained=True)

layer = model.conv1

# Creating a new Conv2d layer with the extra input channels
new_layer = nn.Conv2d(in_channels=new_in_channels,
                      out_channels=layer.out_channels,
                      kernel_size=layer.kernel_size,
                      stride=layer.stride,
                      padding=layer.padding,
                      bias=(layer.bias is not None))

copy_weights = 0  # initialize the new channels with the red channel weights

# The copies must be done under no_grad, otherwise the in-place
# assignments on a parameter that requires grad will break autograd
with torch.no_grad():
    # Copying the weights from the old to the new layer
    new_layer.weight[:, :layer.in_channels] = layer.weight.clone()

    # Copying the weights of the `copy_weights` channel of the old layer
    # to the extra channels of the new layer
    for i in range(new_in_channels - layer.in_channels):
        channel = layer.in_channels + i
        new_layer.weight[:, channel:channel + 1] = \
            layer.weight[:, copy_weights:copy_weights + 1].clone()

model.conv1 = new_layer

This can be modified to work with one or two channels as well.


This will not work in the autograd backward pass unless the weight copies are wrapped in torch.no_grad()… :disappointed_relieved:

I confirm this works. Thanks Pierre