Best way to make a tensor not part of checkpoint

Hi,

What is the best way to prevent a tensor from being saved to and loaded from a checkpoint?
I want to provide the value of the tensor explicitly each time I instantiate my model.

I think the easiest way would be to just save and load the complete state_dict and implement a function that resets the specific tensor, e.g.:

model = Net()
model.load_state_dict(torch.load(PATH))
model.reset()
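
A rough sketch of what such a reset method could look like, assuming the tensor in question is an embedding weight stored on self.embedding (all names and shapes here are just placeholders):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(10, 4)  # placeholder size
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(self.embedding(x))

    def reset(self, new_weight=None):
        # overwrite the specific tensor, either in-place or from an external matrix
        if new_weight is None:
            nn.init.normal_(self.embedding.weight)
        else:
            self.embedding = nn.Embedding.from_pretrained(new_weight, freeze=False)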

Would resetting work if the new value has a different shape than the one in the checkpoint?

In that case you would have to assign a new nn.Parameter.
Could you explain your use case a bit? Maybe there are other, better ways.
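
For example, continuing from the snippet above (my_tensor is a placeholder attribute name), reassigning the attribute with a fresh nn.Parameter works because nn.Module re-registers parameters on assignment:

import torch
import torch.nn as nn

# placeholder: use the actual attribute name instead of `my_tensor`
model.my_tensor = nn.Parameter(torch.randn(20, 4))  # new shape is fine; assignment re-registers the parameter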

Sure, I have a model with an nn.Embedding whose weight matrix I pass in explicitly. My data changes from time to time, and so do the embedding matrix and its shape. I want this tensor to not be part of the checkpoint.
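
For reference, one sketch that might cover this (assuming the embedding lives on model.embedding and new_embedding_matrix is supplied from outside): drop its key from the state_dict before saving, load the rest with strict=False, then plug in the current matrix:

state = model.state_dict()
state.pop('embedding.weight')  # keep the embedding out of the checkpoint; assumes the attribute is named `embedding`
torch.save(state, PATH)

model = Net()
model.load_state_dict(torch.load(PATH), strict=False)  # the missing embedding key is ignored
model.embedding = nn.Embedding.from_pretrained(new_embedding_matrix, freeze=False)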