I was wondering if there’s an equivalent to register_buffer for custom optimizers? I’m currently building a custom optimizer and I want to be able to save some floats that are used within the optimization process.
Some methods within the class don’t use self.param_groups, so is there a way to store attributes on the torch.optim.Optimizer such that state_dict() / load_state_dict() would save and restore those float values?
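To make the setup concrete, here is a minimal sketch of what I mean (the optimizer name, the extra floats, and the update rule are just placeholders). The floats live on the instance rather than in self.state or self.param_groups, so a plain state_dict() / load_state_dict() round trip doesn’t seem to preserve them:

```python
import torch
from torch.optim import Optimizer

class MyOptimizer(Optimizer):
    def __init__(self, params, lr=1e-3):
        defaults = dict(lr=lr)
        super().__init__(params, defaults)
        # Extra floats used during optimization, not tied to any single parameter.
        # These are what I'd like to persist, similar to a module's register_buffer.
        self.running_scale = 1.0
        self.temperature = 0.5

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                # The update uses the extra floats alongside the gradient.
                p.add_(p.grad, alpha=-group["lr"] * self.running_scale)
        return loss
```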