New layer parameter initialization for a pre-trained model

I want to modify a pre-trained model and then train it.
For example, I take the resnet50 model from torchvision.
I remove the last fc layer,
then add a new transposed convolution layer, a new maxpool layer, and a new classifier layer.

My question: are these three new layers' parameters already initialized,
or do I need to initialize them myself with a method like Xavier?

I checked the values of these three layers and they are not zero, but I don't know whether they are already initialized.
And why are they not equal to zero? Where do these values come from?

Thanks for your help!

Initializing the new layers with your own method will simply override whatever values are already there. In PyTorch, every standard layer initializes itself when it is constructed: nn.Linear, nn.Conv2d, and nn.ConvTranspose2d all call their own reset_parameters() inside __init__ (Kaiming-uniform by default), which is why the values you see are non-zero rather than empty.
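If you do want Xavier instead of the defaults, a minimal sketch using Module.apply (the init_weights helper below is mine, not from the original post; the isinstance guard limits it to the newly added layer types):

```python
import torch.nn as nn


def init_weights(m):
    # Re-initialize only the freshly added layer types; with the fc and
    # avgpool stripped off, the resnet backbone contains no Linear or
    # ConvTranspose2d modules, so the pre-trained weights are untouched.
    if isinstance(m, (nn.Linear, nn.ConvTranspose2d)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)


# e.g. applied to one of the new layers:
new_head = nn.Linear(2048, 8)
new_head.apply(init_weights)
```

apply() walks the module and all of its submodules recursively, so calling it once on the whole Net would also work here for the same reason the guard is safe.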

import torchvision.models as models
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, model):
        super(Net, self).__init__()
        # Keep everything from the pre-trained model except the last
        # two children (avgpool and fc)
        self.resnet_layer = nn.Sequential(*list(model.children())[:-2])
        # The three new layers -- these do not exist in the checkpoint
        self.transition_layer = nn.ConvTranspose2d(2048, 2048, kernel_size=14, stride=3)
        self.pool_layer = nn.MaxPool2d(32)
        self.Linear_layer = nn.Linear(2048, 8)

    def forward(self, x):
        x = self.resnet_layer(x)
        x = self.transition_layer(x)
        x = self.pool_layer(x)
        x = x.view(x.size(0), -1)
        x = self.Linear_layer(x)
        return x

res = models.resnet50(pretrained=True)

#model = Net(res)
#fc_features = res.fc.in_features

#res.fc = nn.Linear(fc_features, 10)


I use this code and I don't know which part does the initialization.
The three new layers do not exist in the pre-trained model.
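The initialization happens inside each layer's constructor, not in your code: every standard module calls its own reset_parameters() from __init__, and that is where the non-zero values come from. A quick check with a plain layer:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A freshly constructed layer is already randomly initialized:
# nn.Linear.__init__ calls self.reset_parameters(), which fills the
# weight with a Kaiming-uniform distribution and the bias with a
# small uniform range.
layer = nn.Linear(2048, 8)
print(layer.weight.abs().sum().item() > 0)  # True: not zeros

# Calling reset_parameters() again just redraws from the same scheme
layer.reset_parameters()
```

So your three new layers are usable as-is; re-initializing with Xavier is optional and only changes which distribution the starting values are drawn from.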