Hello everyone,

Let's say we have a `ReLU-NN` class and, for the training phase, a `training_backend` class that handles the optimizer, training data, and so on. Within the `training_backend`'s `__init__` method there are the following assignments:
```python
self.model = model  # an NN model of class ReLU-NN
self.optimizer = torch.optim.LBFGS(self.model.parameters(), ...)

def closure():
    ...
    nn_output = self.model(self.mu_tr)
    ...
    return loss

self.closure = closure
```
The `training_backend` also has the following method to train the assigned model:
```python
def optimize_grad_control(self, maxIt, gradIt=5):
    ...
    self.optimizer.step(self.closure)
    ...
```
Now imagine that I have trained the model and afterwards created a new model of the `ReLU-NN` class. This new model is then assigned to the `training_backend` with:

```python
def new_model(self, new_model):
    # assign the new model
    self.model = new_model
```
My question is: when I want to train the new model with the `optimize_grad_control` method, do I have to reassign a new optimizer and define a new closure function? (I have read somewhere that it is not possible to dynamically change the `model.parameters()` held by an optimizer.)