Question about torch.load()

Hello, I have a question about torch.load(). I noticed that within the same process, if I call torch.load() several times, the first call takes a long time, while the later calls take very little.
e.g.
model1 = torch.load() 3.2s
model2 = torch.load() 0.03s
model3 = torch.load() 0.03s

Can anyone explain what PyTorch does on the first call? And if I call torch.load() 3 times, what is the memory arrangement? Will it use three times the memory of calling torch.load() once?
Thank you!

What PATH arguments do you use for torch.load(PATH) with your respective models? Is this just a test script loading the same checkpoint three times, or different checkpoints?

In the first case, either Python or PyTorch is probably clever enough not to reload the model if it’s the same checkpoint.

In the second case, maybe it’s the first time you call a torch function, and it initializes some things that are not required in the later calls?
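One concrete source of such a first-call gap, independent of PyTorch itself, is the OS page cache: the first read of a checkpoint comes from disk, later reads come from RAM. A stdlib-only sketch of how to observe this (the file here is a made-up stand-in written just before reading, so it is already warm; point the loop at a real checkpoint after a reboot to see a large first-read gap):

```python
import os
import tempfile
import time

# Stand-in checkpoint: ~20 MB of random bytes in a temp directory.
path = os.path.join(tempfile.mkdtemp(), "ckpt.bin")
with open(path, "wb") as f:
    f.write(os.urandom(20 * 1024 * 1024))

# Time three consecutive reads. Against a cold file the first read is
# typically much slower; here the file was just written, so all three
# reads hit the page cache and should be fast.
for i in range(3):
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    print(f"read {i + 1}: {time.perf_counter() - t0:.4f}s ({len(data)} bytes)")
```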

In the first case, even when I load the same checkpoint, the memory actually increases, so I think a new instance is created each time.
In the second case, maybe it does some initialization work. Do you know any details about this initialization process?
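The new-instance hypothesis is easy to check with Python’s pickle, which torch.load uses under the hood for deserialization. A minimal stdlib sketch (the dict is a stand-in for a real checkpoint):

```python
import pickle

# torch.load deserializes via Python's pickle; repeated loads of the
# same bytes each build a fresh object rather than returning a cached one.
blob = pickle.dumps({"weight": [1.0, 2.0, 3.0]})  # stand-in for a checkpoint

m1 = pickle.loads(blob)
m2 = pickle.loads(blob)

print(m1 == m2)  # True: same contents
print(m1 is m2)  # False: distinct objects, so memory grows with every load
```

So loading the same path three times gives three independent objects, which matches the memory growth you see.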

How much does memory increase? Did you try with 3 separate checkpoint files, with different weights?

In any case, I’m not familiar with the internals of PyTorch, these are mainly exploratory questions so that I can learn more about PyTorch thanks to your situation :slight_smile: It just seems logical that there would be some initialization at some point for such a complex framework.

When I load on CPU, the memory increases by about 2 MB every time I call torch.load(); I think that is the new instance’s memory.
e.g. 1st call: 0 -> 198.75 MB
2nd call: 198.75 MB -> 202.59 MB
3rd call: 202.59 MB -> 204.91 MB
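One way to pin down that per-call growth is tracemalloc. This sketch uses plain pickle as a stand-in for torch.load (which is pickle-based), so the absolute numbers differ from the ones above, but the pattern of a fixed increment per load is the same:

```python
import pickle
import tracemalloc

# Stand-in checkpoint: 100k floats, pickled once up front.
blob = pickle.dumps([float(i) for i in range(100_000)])

tracemalloc.start()
models = []
for i in range(3):
    before, _ = tracemalloc.get_traced_memory()
    models.append(pickle.loads(blob))  # each call builds a brand-new list
    after, _ = tracemalloc.get_traced_memory()
    print(f"load {i + 1}: +{(after - before) / 1e6:.2f} MB")
tracemalloc.stop()
```

Each iteration reports roughly the same positive increment, because every load allocates a fresh copy of the data.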