Inplace Operations when using Pyrfm

import torch
from pyrfm import OrthogonalRandomFeature, CompactRandomFeature, RandomFourier, FastFood

class Kernel_Mapping(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # Random Fourier feature map (RBF kernel approximation) from pyrfm.
        transformer = RandomFourier(n_components=10,
                                    kernel='rbf',
                                    use_offset=True, random_state=0)

        # pyrfm works on NumPy arrays, so the input leaves the autograd graph here.
        inputs = input.detach().numpy()
        inputs_trans = transformer.fit_transform(inputs)
        ## Do I want to save the transformed features for backward instead?
        ctx.save_for_backward(input)
        return torch.as_tensor(inputs_trans, dtype=input.dtype)

    @staticmethod
    def backward(ctx, g):
        # Straight-through: pass the incoming gradient back unchanged.
        return g

loader = DataLoader(dataset, batch_size=len(dataset), shuffle=False)
generator = torch.Generator()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

for inputs, targets in loader:
    # Pull out the sensitive attribute as a column vector, then drop it from the inputs.
    sensitive_attributes = inputs[:, sensitive_attribute_idx][:, None]
    inputs = drop_attribute_tensor(inputs, 40)
    inputs = inputs.to(device)
    targets = targets.to(device)

    outputs = model(inputs)
    Z_outputs = Kernel_Mapping.apply(outputs.clone())
    Z_sens_attr = Kernel_Mapping.apply(sensitive_attributes)

    ## Are phi_hat and omega_hat sampled independently?
    phi_hat = torch.normal(0., 1.0 / R, (len(dataset), R),
                           generator=generator.manual_seed(random_seed))
    omega_hat = torch.normal(0., 1.0 / T, (len(dataset), T),
                             generator=generator.manual_seed(int(random_seed / 2)))

    phi_sens_attr = phi_hat.T @ Z_sens_attr
    phi_outputs = phi_hat.T @ Z_outputs
    omega_sens_attr = omega_hat.T @ Z_sens_attr
    omega_outputs = omega_hat.T @ Z_outputs


# Fairness penalty: trace of the product of the projected feature matrices.
inner_arg_sens = matrices['phi_s'] @ matrices['omega_s'].T
inner_arg_output = matrices['omega_f'] @ matrices['phi_f'].T

inner_arg = inner_arg_sens @ inner_arg_output
fair_loss = (params["fairness_weight"] / ((len(train_dataset) - 1) ** 2)) * torch.trace(inner_arg)

#reg_loss = (1.0)/(2.0*step_size) * (model_l2(subtract_models(client_model, global_model)))

loss = loss_func(outputs, targets) + fair_loss

f_loss += fair_loss.item()
loss.backward(retain_graph=True)


The error:

    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [32, 1]], which is output 0 of AsStridedBackward0, is at version 146; expected version 145 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

The RuntimeError seems to occur when I apply the feature mapping to outputs. When I use
Z_outputs = Kernel_Mapping.apply(outputs.detach()) instead, I see no error, but that
is wrong.
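
To see why it is wrong: detaching cuts the fairness term out of the autograd graph, so it no longer contributes any gradient to the model. A toy sketch (not my actual code) of that effect:

import torch

x = torch.ones(3, requires_grad=True)
main_loss = (x ** 2).sum()
fair_term = (x.detach() * 3.0).sum()   # detached: cut out of the autograd graph

(main_loss + fair_term).backward()
print(x.grad)   # tensor([2., 2., 2.]) -- only the main loss contributes a gradient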

Hi Huzaifa!

The likely immediate cause of your inplace-modification error is your use of
retain_graph = True. Try removing it.

Note that optimizer.step() counts as an inplace operation, so you normally
don’t want the “retained graph” still around when you call it.

If you think that you actually do need retain_graph = True, make sure that you
understand why, so that you can either avoid it or arrange things to avoid the
inplace-modification error.
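
Here is a minimal toy sketch (not based on your code) of how a retained graph together with optimizer.step()'s in-place parameter updates produces exactly this kind of error:

import torch

model = torch.nn.Sequential(torch.nn.Linear(3, 4), torch.nn.Linear(4, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 3)
loss = model(x).sum()

loss.backward(retain_graph=True)   # keep the graph (and its saved tensors) alive
opt.step()                         # updates the weights in place, bumping their ._version
loss.backward()                    # RuntimeError: ... modified by an inplace operation

The second backward() needs the weights that were saved during the forward pass, but step() has already modified them in place, so their version counters no longer match.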

I don’t really understand what your code is doing – you can find some suggestions
for debugging inplace-modification errors in this post:

Good luck!

K. Frank

Thanks, K. Frank. You see, I do need to retain the graph, but I have tried removing it and the issue still persists. Also, do you know why a NN would cause this issue but not logistic regression in this case?

Hi Frank, I figured out the section of code that was giving me the error. Can you help me understand this error?

Hi Huzaifa!

Why?

In that case, you might consider removing it (even if that isn’t correct for your
use case) and debugging your inplace-modification error in this somewhat
simpler setting.

As a general comment not necessarily relevant to your issue, whether or not
an inplace modification triggers an error depends on the details of how the
modified tensor is used in the computation graph. Perhaps you have a modified
tensor in both cases but your “Logistic Regression” doesn’t use that modified
tensor in a way that leads to an inplace-modification error.
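
To illustrate with a made-up toy example (not your code): the same in-place add_() is harmless when autograd does not need the modified tensor for the backward pass, but raises the error when it does:

import torch

x = torch.randn(4, requires_grad=True)

# Here autograd does not save y (the gradient of x + 1 is just 1),
# so modifying y in place is harmless.
y = x + 1
y.add_(1)
y.sum().backward()       # runs fine

# Here autograd saves z, because grad_x = grad_out * z for exp(),
# so modifying z in place invalidates the saved tensor.
z = x.exp()
z.add_(1)
z.sum().backward()       # RuntimeError: ... modified by an inplace operation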

Best.

K. Frank

Hi Huzaifa!

It’s pretty hard to tell what’s going on with your code. Not only has the code
you originally posted been edited into something completely different (and the
title of the post has completely changed), but most importantly, the error you
quote:

is quite different from the one you originally posted. This suggests that you
haven’t figured out the (only) section of code that is giving the error.

For reference, the original error you posted was:

(Different tensor shape, different backward operation, and different version
numbers.)

Have you tried any of the debugging techniques I suggested in the post I linked
to in my first reply?

My suggestion would be to figure out / guess which tensor is being modified
based on its shape, and then use a divide-and-conquer strategy based on the
._version of the suspect tensor to locate where it is modified in place. (And
turn off retain_graph = True, even if doing so doesn’t make the error go away
on its own, just to simplify the situation somewhat.)
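
As a toy sketch of the idea (again, not your code): the version counter jumps exactly where the in-place modification happens, so printing it before and after suspect sections lets you bisect your way to the culprit:

import torch

w = torch.ones(3, requires_grad=True)
u = w * 2.0             # non-leaf tensor
v = u * u               # MulBackward saves u for the backward pass
print(u._version)       # 0 -- the version autograd expects at backward time
u.add_(1)               # the in-place op bumps the version counter
print(u._version)       # 1 -- mismatch: backward will now raise the error
v.sum().backward()      # RuntimeError: ... is at version 1; expected version 0 instead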

Best.

K. Frank

Yes, I removed retain_graph = True from the code and completely reimplemented it. Now I get no such error and my code works. Thanks!