Weight regularization on selected filters

At the start of each epoch, some of the filters are selected to be penalized with an L2 term. If the filter selection procedure is inside the training loop (the loop over batches), everything is OK, but I want to select the filters for regularization at the start of each epoch and then begin training. In this case I get the well-known error:
"RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of …". Why does this happen? Any help is appreciated. I know the computation graph is generated dynamically. My code:

crt = nn.CrossEntropyLoss()
for epochIdx in range(self.NumOfEpoch):
    reg_params = Variable(torch.FloatTensor(0), requires_grad=True)
    for n, x in self.model.named_parameters():
        if some_condition_met:  # placeholder for the actual selection test
            reg_params = torch.cat([reg_params, x[[1, 2], :, :, :].view(-1)])  # indices 1 and 2 are just a sample

    # if all filters are selected with the line below (and the for loop above commented out), IT WORKS!!!
    # reg_params = torch.cat([x.view(-1) for x in self.model.parameters()])

    for batchIdx, data in enumerate(self.trainloader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(self.device), labels.to(self.device)

        self.model.zero_grad()
        features, output = self.model(inputs)
        l2_regularization = Reg_lambda * torch.norm(reg_params, 2)
        loss = crt(output, labels) + l2_regularization
        loss.backward()
        optimizer.step()
        # ....

After every optimizer.step(), the parameters get updated. In this scenario, the parameters used for the L2 regularization would be outdated. You have to do the selection inside the batch for loop, I think.
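
Something like this, for example (just a sketch reusing your names; `some_condition_met` stands in for your actual test):

for epochIdx in range(self.NumOfEpoch):
    for batchIdx, data in enumerate(self.trainloader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(self.device), labels.to(self.device)

        # rebuild reg_params from the current parameter values on every batch,
        # so the regularization graph is fresh for each backward()
        selected = []
        for n, x in self.model.named_parameters():
            if some_condition_met:
                selected.append(x[[1, 2], :, :, :].view(-1))
        reg_params = torch.cat(selected)

        self.model.zero_grad()
        features, output = self.model(inputs)
        loss = crt(output, labels) + Reg_lambda * torch.norm(reg_params, 2)
        loss.backward()
        optimizer.step()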

Thank you for your reply.
Yes, I know it works when the selection is done in the batch for loop; I have tested that before.
However, for performance/speed reasons, I want to select the filters in the epoch for loop.
I do not get the meaning of "…the params used on L2 regularization would be outdated…"; please explain it.
Also, when all the filters are in the selection (in the epoch for loop), it works with this code!!!

reg_params = torch.cat([x.view(-1) for x in self.model.parameters()])

I guess I misunderstood the question.

What kind of condition is this? Does it involve any operations on the parameters?

I am not sure if there is an issue with this code.
Can you change it as follows and try?

reg_params = []
for n, x in self.model.named_parameters():
    if some_condition_met:
        reg_params.append(x[[1, 2], :, :, :].view(-1))  # indices 1 and 2 are just a sample

reg_params = torch.cat(reg_params, dim=0)

It would be great if you could provide compact runnable code so we can see what the issue is.

No, the condition does not involve any parameters or operations on them.
Your suggestion does not work and gives the same error: "Trying to backward through the graph a second time (or directly access saved tensors…)".
I changed the code to the following and it works, but I do not know why it works??
This line:

reg_params = torch.cat([reg_params, x[[1, 2], :, :, :].view(-1)])

changed to

temp = [1, 2]
for i in temp:
    reg_params = torch.cat([reg_params, x[i, :, :, :].view(-1)])
....
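
Also, since you asked for compact runnable code, here is a small repro of the difference (a toy tensor stands in for a conv weight; no model or data needed):

import torch

w = torch.randn(4, 3, 3, 3, requires_grad=True)

# list (advanced) indexing: graph built once, backwarded twice -> the RuntimeError
reg_list = w[[1, 2], :, :, :].view(-1)
try:
    for step in range(2):
        torch.norm(reg_list, 2).backward()
except RuntimeError as e:
    print("list indexing:", e)  # "Trying to backward through the graph a second time..."

# integer indexing: also built once, but backwarding twice is fine
reg_int = torch.cat([w[i, :, :, :].view(-1) for i in [1, 2]])
for step in range(2):
    torch.norm(reg_int, 2).backward()
print("integer indexing: OK")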

Thank you InnovArul