Create embedding using torch.randn with requires_grad=True

If I create a randomly initialized embedding with torch.randn((vocab_size, depth), requires_grad=True):
Will PyTorch save it to disk automatically after each training epoch is done?
Will PyTorch load it from disk on the next run rather than initializing another random embedding?
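
For context, here is a minimal sketch of the setup being asked about; the sizes, the index lookup, and the SGD optimizer are placeholders and not from the original post:

```python
import torch

vocab_size, depth = 10_000, 128  # placeholder sizes

# Randomly initialized embedding table; requires_grad=True makes it a leaf
# tensor that autograd will compute gradients for.
embedding = torch.randn((vocab_size, depth), requires_grad=True)

# Since it is not part of any nn.Module, it has to be passed to an optimizer explicitly.
optimizer = torch.optim.SGD([embedding], lr=0.1)

token_ids = torch.tensor([1, 5, 42])   # placeholder batch of indices
vectors = embedding[token_ids]         # differentiable lookup
loss = vectors.sum()                   # stand-in for a real loss
loss.backward()
optimizer.step()
```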

No, modules and state_dicts won't be saved (or loaded) automatically.
You would have to save and restore them yourself.
Have a look at the serialization semantics notes for how to do that.
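
A minimal sketch of doing that explicitly with torch.save / torch.load; the file path and sizes are placeholders:

```python
import torch

vocab_size, depth = 10_000, 128  # placeholder sizes
embedding = torch.randn((vocab_size, depth), requires_grad=True)

# ... train for an epoch ...

# Nothing is written to disk unless you save it yourself.
torch.save(embedding, "embedding.pt")  # placeholder path

# In a later run, load it instead of calling torch.randn again.
embedding = torch.load("embedding.pt")
embedding.requires_grad_(True)  # make sure gradients are still tracked
```

Wrapping the table in nn.Embedding (or an nn.Parameter inside a module) is the more common pattern, since the weights then appear in the module's state_dict and can be checkpointed together with the rest of the model.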