Equivalent of `register_buffer` with torch.optim.Optimizer

Hi All,

I was wondering if there’s an equivalent to register_buffer for custom optimizers? I’m currently building a custom optimizer and I want to be able to save some floats (which are used within the optimization process).

Some methods within the class don’t use self.param_groups, so is there a way to save attributes on the torch.optim.Optimizer such that, if I were to use state_dict / load_state_dict, those float values would be saved and restored?

Thank you!

You should be able to add your custom buffers to the corresponding param_group via:

import torch
import torch.nn as nn

# setup
lin = nn.Linear(1, 1)
optimizer = torch.optim.Adam(lin.parameters(), lr=1e-3)

# dummy training step: forward + backward to populate gradients
lin(torch.randn(1, 1)).backward()
optimizer.step()

# check param_group
print(optimizer.param_groups[0])

# register a custom buffer in the param_group; it will be included in state_dict()
optimizer.param_groups[0]['my_buffer'] = torch.tensor(100)
print(optimizer.state_dict())

# save the state_dict, re-create the optimizer, and restore
sd = optimizer.state_dict()
optimizer = torch.optim.Adam(lin.parameters(), lr=1e-3)
print(optimizer.state_dict())  # 'my_buffer' is missing in the fresh optimizer
optimizer.load_state_dict(sd)
print(optimizer.state_dict())  # 'my_buffer' is restored

# another dummy training step after restoring the state
lin(torch.randn(1, 1)).backward()
optimizer.step()
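
If the float is a hyperparameter of your custom optimizer, another option is to pass it through the defaults dict when subclassing torch.optim.Optimizer: every entry in defaults is copied into each param_group, so it is serialized by state_dict() and restored by load_state_dict(). Per-parameter tensors can likewise be kept in self.state[p], which is also part of the state_dict. Below is a minimal sketch; the optimizer name and the update rule are made up purely for illustration.

import torch
from torch.optim import Optimizer

class MyCustomOptimizer(Optimizer):  # hypothetical example optimizer
    def __init__(self, params, lr=1e-3, my_float=100.0):
        # entries in `defaults` are copied into every param_group,
        # so `my_float` round-trips through state_dict()/load_state_dict()
        defaults = dict(lr=lr, my_float=my_float)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            lr = group['lr']
            my_float = group['my_float']
            for p in group['params']:
                if p.grad is None:
                    continue
                # toy update rule, just to show the stored float being used
                p.add_(p.grad, alpha=-lr / my_float)

# usage: the float shows up in (and is restored from) the state_dict
opt = MyCustomOptimizer(lin.parameters(), my_float=42.0)
print(opt.state_dict()['param_groups'][0]['my_float'])  # 42.0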