Compare two layers of the same model

I’m writing code for an autoencoder and testing the relevance of each layer using a function.
After calculating the relevance of each layer, how can I compare the layers and prune the one with the lowest relevance?

Thank you

Are you attempting to prune the weights of each layer or each layer as a whole?
IIRC, “The Lottery Ticket Hypothesis” paper (arXiv:1803.03635) provides a good overview of common methods for weight pruning, though I’m not sure how well these would generalize to pruning an entire layer all at once (something iterative might still be needed).
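If it does turn out to be weight pruning you want, PyTorch has built-in utilities in `torch.nn.utils.prune`. A minimal sketch of L1 magnitude-based pruning (the layer and the 30% amount are just made up for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical encoder layer, just for illustration.
layer = nn.Linear(64, 32)

# Zero out the 30% of weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# The module now recomputes weight = weight_orig * weight_mask
# on each forward pass; the mask encodes the pruning.
sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2f}")  # roughly 0.30
```

That said, this zeros individual weights rather than removing a whole layer, which is why something different is needed for your case.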

Yes, I’m looking to prune the whole layer.

But do you know how I can compare two layers, starting from the last layer in the model? That would be helpful.

Thank you

In a feedforward model, it can be difficult to describe the relative importance of layers that are in sequence, because the learned representations can differ (e.g., they sit at different levels of abstraction, from simple patterns to high-level concepts). It’s more common to compare the relevance of two layers in parallel, because one can directly weigh the contribution of each to the output, but it sounds like you want to compare two layers in sequence.

In this case, a hack might be to use a ResNet-style model with identity or skip connections between certain layers, and to do an ablation study. For example, if you already know your desired metric (e.g., accuracy) you can simply ablate layers with this style of architecture and observe how the accuracy is affected (possibly with some finetuning after truncating the model).
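To make that concrete, here is a minimal sketch of what I mean (the block structure and names are hypothetical): a residual block whose transform can be switched off, so the ablated block reduces to an identity and the rest of the network still receives a usable input.

```python
import torch
import torch.nn as nn

class AblatableBlock(nn.Module):
    """Residual block: out = x + f(x). With f ablated, out = x."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.ablated = False  # toggle to drop this block's contribution

    def forward(self, x):
        if self.ablated:
            return x  # skip connection only: the block acts as an identity
        return x + self.f(x)

model = nn.Sequential(AblatableBlock(16), AblatableBlock(16))
x = torch.randn(4, 16)

baseline = model(x)
model[1].ablated = True  # ablate the last block first, as you suggested
ablated = model(x)

# The change in your metric under ablation (here just the raw output
# change, as a stand-in) proxies the block's relevance.
relevance = (baseline - ablated).norm().item()
```

In practice you would measure reconstruction error or accuracy instead of the raw output difference, and finetune after permanently removing a block.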

Thank you for your response
I have done the ablation study on the model and got the results, but now I want to compare it with a dynamic algorithm that prunes irrelevant layers according to some criterion.
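Something like this greedy sketch is what I have in mind (all names, the relevance scores, and the `evaluate` function are hypothetical): iterate over layers from least to most relevant, and keep a layer pruned only while the metric stays within a tolerance of the baseline.

```python
def prune_dynamically(relevances, evaluate, tol=0.01):
    """Greedily prune the least-relevant layers while the metric holds up.

    relevances: dict mapping layer name -> relevance score.
    evaluate:   callable taking pruned=set_of_layer_names, returning the metric.
    """
    baseline = evaluate(pruned=set())
    pruned = set()
    # Consider layers from least to most relevant.
    for name in sorted(relevances, key=relevances.get):
        candidate = pruned | {name}
        if baseline - evaluate(pruned=candidate) <= tol:
            pruned = candidate  # metric barely moved: keep this layer pruned
    return pruned

# Toy usage with made-up numbers: enc1 is important, enc2/enc3 are not.
rel = {"enc3": 0.1, "enc2": 0.5, "enc1": 0.9}
metric = lambda pruned: (
    0.90 - 0.02 * ("enc1" in pruned) - 0.004 * len(pruned - {"enc1"})
)
print(prune_dynamically(rel, metric, tol=0.01))  # prunes enc3 and enc2, keeps enc1
```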

Thank you so much
I’ll keep looking for it.