Zeroing the gradients of certain classifier weights before the weight update

Before updating the classifier head, I want to zero the gradients of certain weights in the classifier. The idea is to compute the cross-entropy loss as usual, then select the subset of output nodes that correspond to the labels present in the batch: only those nodes' connections should be updated, while the gradients of all other classifier weights stay untouched. My task is continual learning, so at each step I train on a dataset that contains only a subset of the classes of the full dataset.
After adding the code that freezes these weights, I get meaningless results (worse than, and different from, the run without freezing). My hunch is that I am indexing the weight matrix incorrectly.
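One way to double-check the indexing is to print each parameter's name and shape. The snippet below does this on a stock torchvision resnet18 with a 170-class head; my model is wrapped differently and seems to expose an extra tensor (see the comment in my code), so the index may not match:

    import torchvision

    # Inspect which index in fc.parameters() holds the weight matrix.
    # A stock resnet18 head is a single nn.Linear, so only weight and bias show up.
    model = torchvision.models.resnet18(num_classes=170)
    for i, (name, p) in enumerate(model.fc.named_parameters()):
        print(i, name, tuple(p.shape))
    # 0 weight (170, 512)
    # 1 bias (170,)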
This is the code:

    if self._args.freeze_classifier_weights is True:
        # Classes that are NOT part of the current task: their rows should stay frozen.
        unq_task_targets = torch.unique(example_dict['task_labels'][0], sorted=True)
        row_indexes = list(set(self.classes) - set(unq_task_targets.tolist()))
        row_indexes_tensor = torch.tensor(row_indexes)

        for i, param in enumerate(self._model_and_loss._model.resnet.fc.parameters()):
            # The parameters list holds 3 tensors:
            # [torch.Size([512]), torch.Size([170, 512]), torch.Size([10])]
            if i == 1:  # we need the weight matrix, not the biases, therefore i == 1
                param.grad[row_indexes_tensor, :] = torch.zeros_like(param.grad[row_indexes_tensor, :])

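For context, here is a minimal, self-contained sketch of what I am trying to achieve (a toy linear classifier standing in for the resnet head; the sizes and targets are made up):

    import torch
    import torch.nn as nn

    # Toy stand-in for the resnet head: 10 classes, 512-dim features.
    num_classes, feat_dim = 10, 512
    fc = nn.Linear(feat_dim, num_classes)
    optimizer = torch.optim.SGD(fc.parameters(), lr=0.1)

    features = torch.randn(4, feat_dim)
    targets = torch.tensor([2, 3, 2, 3])  # classes present in the current task

    loss = nn.functional.cross_entropy(fc(features), targets)
    loss.backward()

    # Zero the gradient rows of every class NOT in the current task,
    # so that optimizer.step() only changes the rows for classes 2 and 3.
    frozen_rows = torch.tensor(sorted(set(range(num_classes)) - set(targets.unique().tolist())))
    fc.weight.grad[frozen_rows, :] = 0.0
    fc.bias.grad[frozen_rows] = 0.0  # freeze the matching bias entries as well

    optimizer.step()

This is essentially what the code above is meant to do inside my training loop.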
I appreciate any help and guidance on this issue.