Does changing requires_grad mean creating a new optimizer?

From this link, I read the thread about unfreezing parameters during training. In my case, however, I want to freeze some parameters after a certain number of epochs.
I have an nn.ParameterDict that holds a set of nn.Parameter objects under specific keys. Initially, every parameter in the nn.ParameterDict is trainable. After some epochs, inside a for loop, I set requires_grad to False for some of the nn.Parameter entries in the nn.ParameterDict. However, I see that they are still fine-tuned/trained even though their requires_grad is actually False.
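A minimal sketch of the setup I mean (the keys, sizes, and loss are just illustrative, and I am assuming an optimizer with momentum state such as Adam, where even a zero gradient can still move a parameter):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative ParameterDict with two entries.
params = nn.ParameterDict({
    "a": nn.Parameter(torch.randn(4)),
    "b": nn.Parameter(torch.randn(4)),
})
opt = torch.optim.Adam(params.parameters(), lr=0.1)

for epoch in range(4):
    if epoch == 2:
        # Freeze "b" after two epochs and remember its value.
        params["b"].requires_grad_(False)
        b_frozen = params["b"].detach().clone()

    # With set_to_none=False the grads become zero tensors rather than
    # None, so the optimizer still processes the frozen parameter.
    opt.zero_grad(set_to_none=False)

    # Dummy loss over the still-trainable parameters only.
    loss = sum((p ** 2).sum() for p in params.values() if p.requires_grad)
    loss.backward()

    # Adam's running momentum keeps moving "b" even with a zero gradient.
    opt.step()

print(torch.equal(params["b"].detach(), b_frozen))  # False: "b" drifted
```

This reproduces what I observe: the frozen parameter keeps changing after requires_grad is set to False.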
Should I define a new optimizer when I freeze some parameters?
Thank you