NAS training: dynamically update only activated parameters?

I am training a NAS network consisting of a controller and a model. At each training epoch, the controller dynamically configures the model network to optimize the model's reward. Since this is NAS training, the search space is relatively large and the model itself is very large; however, 99% of the time many of the parameters are not activated.

My question: is it possible to change my optimizer (an AdamOpt) on the fly so that it only updates the parameters that were activated in that epoch? My hope is that this would decrease overall training time.
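To make it concrete, here is roughly what I have in mind (a minimal sketch assuming PyTorch; `SuperNet`, the branch structure, and the random branch choice are placeholders standing in for my actual model and controller):

```python
import random

import torch
import torch.nn as nn

class SuperNet(nn.Module):
    """Toy super-network: only one branch is active per epoch."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList(nn.Linear(16, 16) for _ in range(4))

    def forward(self, x, branch_idx):
        return self.branches[branch_idx](x)

net = SuperNet()

for epoch in range(10):
    # Stand-in for the controller's architecture decision.
    branch_idx = random.randrange(len(net.branches))
    active_params = list(net.branches[branch_idx].parameters())

    # Rebuild the optimizer over only the active subset each epoch.
    # Caveat: this discards Adam's first/second-moment state every time.
    optimizer = torch.optim.Adam(active_params, lr=1e-3)

    x = torch.randn(8, 16)
    loss = net(x, branch_idx).pow(2).mean()  # dummy objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

One concern with this approach is that recreating the optimizer throws away Adam's running moment estimates for every parameter at every epoch. The alternative I can think of is keeping a single persistent Adam over all parameters and relying on the fact that (as far as I can tell) PyTorch optimizers skip parameters whose `.grad` is `None`, so parameters outside the active subgraph would not be touched by `step()` anyway — but I don't know whether that saves meaningful time in practice.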