How to customize the "backward" function so it does not calculate gradients or update a selected sub-tensor of a layer's parameter tensor

Hi @ptrblck, I have a question:

Why is it necessary to use torch.fx here?

Is it possible to simply replace the selected layer with a customized one that freezes the desired parameters (using setattr(model, module_name, new_module)) without losing the weights in the transition? Something like the sketch below is what I have in mind.
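
For context, here is a minimal sketch of what I mean. The toy model, the layer name fc1, and the frozen row are made-up placeholders, and the gradient hook only zeroes the gradient of the selected slice after it is computed rather than skipping the computation entirely:

```python
import torch
import torch.nn as nn

# Toy model; "fc1" and the frozen slice below are just placeholders.
model = nn.Sequential()
model.add_module("fc1", nn.Linear(4, 3))
model.add_module("relu", nn.ReLU())
model.add_module("fc2", nn.Linear(3, 2))

module_name = "fc1"
old_module = getattr(model, module_name)

# Build the replacement layer and copy the existing weights so nothing is lost.
new_module = nn.Linear(old_module.in_features, old_module.out_features)
new_module.load_state_dict(old_module.state_dict())

# Zero the gradient of a selected sub-tensor (here: the first output row)
# so the optimizer never updates it; the rest of the weight stays trainable.
grad_mask = torch.ones_like(new_module.weight)
grad_mask[0] = 0.0
new_module.weight.register_hook(lambda grad: grad * grad_mask)

# Swap the module in place; the other parameters keep training normally.
setattr(model, module_name, new_module)

# Quick check: the masked row's gradient is zero after a backward pass.
model(torch.randn(8, 4)).sum().backward()
print(model.fc1.weight.grad[0])  # all zeros
print(model.fc1.weight.grad[1])  # generally non-zero
```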