Saving/loading variable-sized tensors in state_dict

Hello forum,
I'm trying to use PyTorch to store a variable-length array inside the state_dict so that the array persists across checkpoints.

First, I register a buffer, the same way the batch-normalization layers do, so that the array is added to the state_dict:
self.register_buffer('train_loss_data', torch.tensor([], requires_grad=False))
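
For context, the buffer is registered in the module's __init__. A minimal sketch, where the MLP class and its single layer are just placeholders:

import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.name = 'mlp'
        self.fc = nn.Linear(10, 1)
        # Empty 1-D buffer: saved in state_dict, but not a parameter,
        # so the optimizer never touches it.
        self.register_buffer('train_loss_data', torch.tensor([]))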

During training, at every checkpoint I append an item to the tensor and save:
model.train_loss_data = torch.cat([model.train_loss_data, torch.tensor([train_loss]).to(device)])
torch.save(model.state_dict(), './models/' + model.name + '.ckpt')

Then, when I load the best checkpoint, I get a size-mismatch error:
model.load_state_dict(torch.load('./models/' + model.name + '.ckpt'))

RuntimeError: Error(s) in loading state_dict for mlp:
While copying the parameter named "train_loss_data", whose dimensions in the model are torch.Size([21]) and whose dimensions in the checkpoint are torch.Size([8]).

I could achieve this functionality in TensorFlow with the validate_shape flag:
graph.train_acc = tf.Variable([], trainable=False, name='train_acc', validate_shape=False)

Is there any way to achieve this with PyTorch's saving/loading machinery?

I was able to work around it by overriding load_state_dict() as follows:

def load_state_dict(self, state_dict, strict=True):
    # Resize the buffer to the checkpoint's shape before copying.
    self.train_loss_data.resize_(state_dict['train_loss_data'].shape)
    super().load_state_dict(state_dict, strict)
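
For anyone hitting the same problem, here is a self-contained sketch of the full flow with that override; the MLP class, sizes, and file name are only illustrative:

import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)
        self.register_buffer('train_loss_data', torch.tensor([]))

    def load_state_dict(self, state_dict, strict=True):
        # Match the checkpoint's shape before the copy happens.
        self.train_loss_data.resize_(state_dict['train_loss_data'].shape)
        super().load_state_dict(state_dict, strict)

model = MLP()
for step in range(8):
    loss = float(step)  # stand-in for the real training loss
    model.train_loss_data = torch.cat(
        [model.train_loss_data, torch.tensor([loss])])
torch.save(model.state_dict(), 'mlp.ckpt')

# A fresh model starts with an empty buffer; the override resizes it
# to torch.Size([8]) so copying succeeds.
restored = MLP()
restored.load_state_dict(torch.load('mlp.ckpt'))
print(restored.train_loss_data.shape)  # torch.Size([8])

Note that resize_() leaves the newly allocated elements uninitialized, but load_state_dict() copies the checkpoint values over them right away, so that is fine here.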