state_dict() round trip fails

I tried the following:

torch.save(net.state_dict(), "file1")
weights = torch.load("file1", map_location=device)

net1 = Net()
net1.load_state_dict(weights)

net2 = Net()
net2.load_state_dict(weights)

torch.save(net1.state_dict(), "file1")
torch.save(net2.state_dict(), "file2")

I then find that file1 and file2 are byte-wise different. How can that be? My goal is to find out whether the weights of a net have changed since the last save.
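Rather than comparing the saved files byte for byte, you could compare the two state_dicts tensor by tensor. A minimal sketch (the helper name `state_dicts_equal` is mine, not a PyTorch API):

```python
import torch

def state_dicts_equal(sd1, sd2):
    """Return True if both state_dicts contain the same keys
    mapping to element-wise identical tensors."""
    if sd1.keys() != sd2.keys():
        return False
    return all(torch.equal(sd1[k], sd2[k]) for k in sd1)
```

You would then load both checkpoints with `torch.load` and call `state_dicts_equal` on the results instead of diffing the files.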

Are you seeing different parameters in file1 and file2 after reloading them?
If so, could you post the model architecture so that we could reproduce this issue?

I am seeing the same parameters. In fact, I figured it out: torch.save appears to add some non-determinism to the byte representation it writes out, similar to salting, so identical weights do not produce identical files.
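If you need a stable fingerprint for change detection, one option is to hash the tensor values themselves rather than the serialized file. A sketch under the assumption that all entries are tensors (the helper name `state_dict_digest` is hypothetical):

```python
import hashlib
import torch

def state_dict_digest(sd):
    """Deterministic SHA-256 fingerprint of a state_dict's values.

    Hashes keys in sorted order plus the raw bytes of each tensor,
    so the digest depends only on the parameter values, not on how
    torch.save happens to serialize them.
    """
    h = hashlib.sha256()
    for key in sorted(sd):
        h.update(key.encode("utf-8"))
        h.update(sd[key].detach().cpu().contiguous().numpy().tobytes())
    return h.hexdigest()
```

Saving this digest after each training run lets you answer "did the weights change?" by comparing two short strings.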