Freezing weights on part of a layer, how to?

I understand that I can freeze weights in an entire layer.
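For example, something like this (the model and layer here are just placeholders):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),   # the layer I'd like to partially freeze
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)

# Freezing the whole first conv: none of its parameters get gradients.
for p in model[0].parameters():
    p.requires_grad = False
```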

Can I freeze a selected part of a layer while leaving some filters active?

I don’t want to just overwrite the weights or gradients after the fact; I would rather avoid the overhead of the gradient computation and the overwrite step, since I’m not going to use them anyway.

Is this possible, and if so, how?

I’m not sure if this is possible. Would you be able to divide the layer into two pieces: one that includes the part you will eventually want to freeze, and one that does not?

So I guess the answer is probably not.
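Something along these lines is what I had in mind with the two-piece split, just as a sketch (the layer sizes and the 8/8 split are made up):

```python
import torch
import torch.nn as nn

class PartiallyFrozenConv(nn.Module):
    """Stand-in for a Conv2d(3, 16, 3): the first 8 filters live in a frozen
    conv, the other 8 in a trainable one, and the outputs are concatenated."""
    def __init__(self):
        super().__init__()
        self.frozen = nn.Conv2d(3, 8, 3, padding=1)
        self.trainable = nn.Conv2d(3, 8, 3, padding=1)
        for p in self.frozen.parameters():
            p.requires_grad = False   # no weight gradients for these filters

    def forward(self, x):
        # Same output shape as the original 16-filter layer.
        return torch.cat([self.frozen(x), self.trainable(x)], dim=1)
```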

I’m not sure about splitting layers, because backprop would still have to go through those neurons; they would simply let the gradients pass unmodified, and I would have to see how to rewire the graph. Even if that were possible, I’m trying to do something dynamic, a bit like dropout, except (a) assigned rather than random, (b) acting on the gradients, and (c) I don’t want to kill the gradients that go there, just let them through. I wonder whether the overhead of rebuilding the layers wouldn’t end up being greater than just flattening (zeroing out) the gradients before backprop.

In my case, ideally one would pass a mask to the forward graph so that gradients are only computed for some layers, with a pass-through assigned to the others. But I don’t know enough about the guts of PyTorch or CUDA to tell whether that would break the parallel computations.
I guess I will just have to test it and measure how costly this is.
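One thing I might test is a hook on the parameters themselves, something like this (the mask and shapes are made up; the gradient is still computed and only multiplied by the mask afterwards, but the gradient w.r.t. the input keeps flowing to earlier layers):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, 3, padding=1)

# Assigned (not random) mask over output filters: 1 = trainable, 0 = frozen.
filter_mask = torch.ones(16, 1, 1, 1)
filter_mask[8:] = 0.0

# These hooks run during backward and scale the weight/bias gradients;
# the gradient w.r.t. the layer's input is untouched, so it still
# propagates to earlier layers.
conv.weight.register_hook(lambda grad: grad * filter_mask)
conv.bias.register_hook(lambda grad: grad * filter_mask.view(-1))
```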

thanks

Directly modifying the gradient won’t add too much cost, as long as you choose the right way to implement it.
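A minimal sketch of what I mean, assuming plain SGD and a made-up layer (with momentum or weight decay the masked filters could still drift a little, so handle that separately if it matters):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, 3, padding=1)
optimizer = torch.optim.SGD(conv.parameters(), lr=0.1)

x = torch.randn(2, 3, 8, 8)
loss = conv(x).pow(2).mean()

optimizer.zero_grad()
loss.backward()

# Zero the gradients of the filters that should stay frozen (here 8..15);
# this masking step is negligible next to the backward pass itself.
with torch.no_grad():
    conv.weight.grad[8:].zero_()
    conv.bias.grad[8:].zero_()

optimizer.step()
```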

I was hoping to avoid the cost of computing the gradients when I didn’t need to. I’ll use it as is for now. Thanks.