Can I delete the part of a loaded pretrained network that is not useful?

I have a pretrained network; I only want part of it and would like to remove the rest, which is not useful (to save memory).
How should I do that?
Let's say I load my pretrained net as:

    load_name = os.path.join('data/pretrained_model/model_pretrained.pth')
    checkpoint = torch.load(load_name)
    model.load_state_dict(checkpoint['model'], strict=False)

Can I delete the part of the network that is not useful/used?

I have a workaround. Define a new class that contains only the components you need, then create two models: one of the old class and one of the new class. Load the pretrained weights into the old-class object, manually assign its weights and biases to the new-class object, and finally send only the new model to the GPU.

Here is a simple example, in case it helps:

    import torch
    import torch.nn as nn

    # has 3 weight layers
    old_model = nn.Sequential(
        nn.Linear(200, 100),
        nn.Linear(100, 50),
        nn.Linear(50, 2),
    )

    # has 2 weight layers
    new_model = nn.Sequential(
        nn.Linear(200, 100),
        nn.Linear(100, 50),
    )

    # load the pretrained weights into old_model
    checkpoint = torch.load('data/pretrained_model/model_pretrained.pth')
    old_model.load_state_dict(checkpoint['model'])

    # assign the weights of the two kept layers to new_model
    for i in [0, 1]:
        new_model[i].weight = old_model[i].weight
        new_model[i].bias = old_model[i].bias

    # send only new_model to the GPU
    new_model = new_model.cuda()
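A simpler variant of the same idea (my own sketch, not from the post above): because the kept layers sit at the same positions in both models, their `state_dict` keys (`0.weight`, `0.bias`, `1.weight`, `1.bias`) match, so you can load the full model's `state_dict` straight into the smaller model with `strict=False` and skip the manual copy loop:

```python
import torch
import torch.nn as nn

# full architecture: 3 weight layers (stands in for the pretrained model)
old_model = nn.Sequential(
    nn.Linear(200, 100),
    nn.Linear(100, 50),
    nn.Linear(50, 2),
)

# truncated architecture: only the layers we want to keep
new_model = nn.Sequential(
    nn.Linear(200, 100),
    nn.Linear(100, 50),
)

# strict=False loads the entries whose keys match and reports the rest:
# '2.weight' and '2.bias' exist in the checkpoint but not in new_model.
missing, unexpected = new_model.load_state_dict(old_model.state_dict(),
                                                strict=False)
print(sorted(unexpected))
```

The returned `unexpected` list tells you exactly which parameters were dropped, which is a useful sanity check that you removed only what you intended.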