When using the same initialization, do you get different results?

I am trying to understand how torch.manual_seed() works.
I trained a model with two different seeds (1 and 2), using the weights of a pre-trained model as the initialization.
All dataset samples are used for training, and the dataloader loads them sequentially (no shuffling). However, I still got different results with the two seeds.
Doesn’t the seed only control weight initialization and data loading? Is there anything else the seed controls?
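For context, a minimal sketch of what torch.manual_seed() itself does: it seeds the global PRNG, so the same seed reproduces the same sequence of random draws, while different seeds produce different ones.

```python
import torch

# torch.manual_seed seeds the global PRNG: the same seed
# reproduces the same sequence of random numbers.
torch.manual_seed(1)
a = torch.randn(3)

torch.manual_seed(1)
b = torch.randn(3)

torch.manual_seed(2)
c = torch.randn(3)

print(torch.equal(a, b))  # True: same seed, identical draws
print(torch.equal(a, c))  # False: different seed, different draws
```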

The seed initializes the pseudo-random number generator (PRNG) used by all subsequent calls that sample random numbers. Since you are loading pretrained weights, seeding the code has no effect on the initialization.
However, if your model uses any other kind of random operation (e.g. Dropout layers), the seed will change its behavior.
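A small sketch illustrating this: in training mode a Dropout layer samples a random mask from the PRNG, so different seeds zero out different activations even with identical inputs and weights.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
drop.train()  # training mode: dropout is active
x = torch.ones(100)

# The dropout mask is drawn from the global PRNG,
# so the seed determines which activations are zeroed.
torch.manual_seed(1)
out1 = drop(x)

torch.manual_seed(2)
out2 = drop(x)

print(torch.equal(out1, out2))  # False: the two seeds produce different masks
```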

What if I freeze the model parts that contain dropout layers and only train the final classifier layers? Could I still get different results with different seeds?

You could disable the dropout layers by calling .eval() on them, which gets rid of their random behavior. Other layers could still use random operations, so you would need to check them.
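A sketch of that approach, assuming a toy nn.Sequential model: put only the Dropout modules into eval mode while the rest of the model stays in training mode, which makes the forward pass deterministic with respect to the seed.

```python
import torch
import torch.nn as nn

# Toy model for illustration
model = nn.Sequential(
    nn.Linear(10, 10),
    nn.Dropout(p=0.5),
    nn.Linear(10, 2),
)

model.train()  # whole model in training mode...
for m in model.modules():
    if isinstance(m, nn.Dropout):
        m.eval()  # ...except dropout, which is disabled

x = torch.randn(4, 10)
torch.manual_seed(1)
out1 = model(x)
torch.manual_seed(2)
out2 = model(x)

print(torch.allclose(out1, out2))  # True: dropout disabled, forward is deterministic
```

Note that if the model contained e.g. BatchNorm layers, .train() mode would still update their running statistics, so you would need to check for other sources of nondeterminism as mentioned above.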