You could load the pretrained weights and just call the desired init method on the last two layers:
import torch

model = ...  # create your model instance here
model.load_state_dict(torch.load(...))  # or your equivalent code using torch.load

# Reinitialize only the layers you want to reset; the pretrained
# parameters of all other layers stay intact.
with torch.no_grad():
    torch.nn.init.xavier_uniform_(model.fc1.weight)
    torch.nn.init.zeros_(model.fc1.bias)
    ...  # repeat for the other layer(s) you want to reinitialize
In the posted code snippet you won't really need the torch.no_grad() guard.
It was just a precaution in case you performed some operations before loading the weights; if you initialize the parameters right after loading the model, you can simply remove it.
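For completeness, here is a minimal self-contained sketch of the full flow. The architecture, the layer names fc1/fc2, and the checkpoint path "model.pth" are hypothetical placeholders, so adapt them to your model:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)
        self.fc2 = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = Net()
model.load_state_dict(torch.load("model.pth"))  # hypothetical checkpoint path

# Reinitialize the chosen layers in place; autograd does not track
# the in-place init ops inside the no_grad() block.
with torch.no_grad():
    for layer in (model.fc1, model.fc2):
        torch.nn.init.xavier_uniform_(layer.weight)
        torch.nn.init.zeros_(layer.bias)

Looping over the layers just keeps the snippet short; calling the init functions on each parameter explicitly works the same way.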