Freeze weights at test phase

I want to freeze the weights of convolution layers at test phase if a weight's value is smaller than a threshold. How can I implement this in PyTorch?
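
For concreteness, here is a minimal sketch of the operation I have in mind (the toy model and the threshold are just placeholders):

```python
import torch
import torch.nn as nn

# hypothetical toy model standing in for the real network
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
threshold = 1e-2  # hypothetical threshold

model.eval()
with torch.no_grad():
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # zero out conv weights whose magnitude is below the threshold
            m.weight.mul_((m.weight.abs() >= threshold).to(m.weight.dtype))
```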

I am sorry, but what do you mean by freezing weights at test phase? Usually the weights will not change, considering that there is no backward operation at test phase. The weights only change when there is an optimizer together with a backward operation.

Thanks for the reply. Maybe I expressed myself vaguely. I want the small weights to be zeroed out at test phase to save computing time.

It won’t save computing time, unfortunately. Even if you have many zero entries, as long as it is a dense weight tensor, the computation is basically the same. This is true for most, if not all, DL frameworks.
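
You can verify this with a rough timing sketch (shapes and the sparsifying threshold are arbitrary); zeroing most of the kernel leaves the dense conv wall time essentially unchanged:

```python
import time
import torch
import torch.nn.functional as F

x = torch.randn(8, 64, 56, 56)
w_dense = torch.randn(64, 64, 3, 3)
w_zeroed = w_dense * (w_dense.abs() >= 1.0)  # most entries zeroed out

def bench(w, iters=100):
    F.conv2d(x, w, padding=1)  # warm-up
    t0 = time.perf_counter()
    for _ in range(iters):
        F.conv2d(x, w, padding=1)
    return time.perf_counter() - t0

print("dense :", bench(w_dense))
print("zeroed:", bench(w_zeroed))  # roughly the same wall time
```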

I think what you want is pruning the network weights; with sparse matrix computation it can be accelerated.
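
As a pointer, newer PyTorch versions (1.4+) ship `torch.nn.utils.prune`; a minimal threshold-based sketch, following the custom-pruning pattern from the official pruning tutorial:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

class ThresholdPruning(prune.BasePruningMethod):
    """Zero out entries whose absolute value falls below `threshold`."""
    PRUNING_TYPE = "unstructured"

    def __init__(self, threshold):
        self.threshold = threshold

    def compute_mask(self, tensor, default_mask):
        # keep only entries at or above the threshold
        return default_mask * (tensor.abs() >= self.threshold)

conv = nn.Conv2d(3, 16, 3)
ThresholdPruning.apply(conv, "weight", threshold=1e-2)  # hypothetical threshold
# conv.weight is now computed as conv.weight_orig * conv.weight_mask
print((conv.weight == 0).float().mean())  # fraction of pruned weights
```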

Yes, you’re right. There are some mistakes in my previous statement. Can it be realized directly in PyTorch?

There is a huge amount of optimization for dense-weight convolutions, especially if you use cuDNN. So even if sparse weights work, they are likely slower than dense weights. Considering that conv kernels are usually not very large, I don’t think this is worth doing.
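
A rough proxy comparison (convolution has no built-in sparse path, so matmul stands in here; sizes and sparsity are arbitrary) often shows the sparse path losing at moderate sparsity:

```python
import time
import torch

dense = torch.randn(1024, 1024)
dense = dense * (dense.abs() >= 2.0)  # roughly 95% zeros
sparse = dense.to_sparse()
x = torch.randn(1024, 256)

def bench(fn, iters=50):
    fn()  # warm-up
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return time.perf_counter() - t0

print("dense mm :", bench(lambda: torch.mm(dense, x)))
print("sparse mm:", bench(lambda: torch.sparse.mm(sparse, x)))
```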

I also want to ask about this problem, but I have no time to work on pruning right now. If you have any info, please share it with me as well.
Thank you! :yum:

I plan to prune the weights offline and then reconstruct the network at its original size with the pruned weights.
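
Something like this offline thresholding of a checkpoint would do it (the file names, threshold, and `MyNet` are placeholders):

```python
import torch

# hypothetical offline workflow: threshold a saved checkpoint,
# then load it back into an identically-sized network
state = torch.load("model.pth")  # hypothetical checkpoint file
threshold = 1e-2                 # hypothetical threshold
for name, tensor in state.items():
    if name.endswith("weight") and tensor.dim() == 4:  # conv kernels
        tensor.mul_((tensor.abs() >= threshold).to(tensor.dtype))
torch.save(state, "model_pruned.pth")

# later, at test time:
# model = MyNet()  # same architecture as before
# model.load_state_dict(torch.load("model_pruned.pth"))
```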