How to freeze the vector at the second-to-last layer of a shallow model?

Hi there, I am using a shallow NN with 500 neurons in my hidden layer. I have a question: can I save the coefficients of this layer after training? I need to use these coefficients for further classification.

Please guide me.

If by “coefficients” you mean the weight parameters (and biases), then yes: you can access them via linear_layer.weight and linear_layer.bias (replace linear_layer with the actual name of your layer object), or access the parameters through its state_dict via linear_layer.state_dict().
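
For illustration, a minimal sketch (the layer name and sizes here are made up, assuming the hidden layer is an nn.Linear):

import torch.nn as nn

linear_layer = nn.Linear(784, 500)  # hypothetical hidden layer: 784 in, 500 out
print(linear_layer.weight.shape)         # torch.Size([500, 784])
print(linear_layer.bias.shape)           # torch.Size([500])
print(linear_layer.state_dict().keys())  # odict_keys(['weight', 'bias'])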

No sir, not the weights but the resultant output after the activation function, maybe similar to transfer learning. For example, a pre-trained network is first used to fine-tune the model, then the output of the second-to-last layer needs to be saved for further classification using an SVM or HMM, etc.
I am following this paper

Regards

You can get the forward activations directly in the model’s forward method, e.g. via:

def forward(self, x):
    act1 = self.layer1(x)
    act2 = self.layer2(act1)
    ...
    # actX is the activation of the second-to-last layer
    out = self.last_layer(actX)
    return out, actX

so the activation of interest is returned directly as an additional output.
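
Put together, a minimal runnable sketch of this approach (the ShallowNet class, layer names, and input/output sizes are made up for illustration; the 500 hidden units match your description):

import torch
import torch.nn as nn

class ShallowNet(nn.Module):
    def __init__(self, in_features=20, hidden=500, num_classes=2):
        super().__init__()
        self.layer1 = nn.Linear(in_features, hidden)
        self.last_layer = nn.Linear(hidden, num_classes)

    def forward(self, x):
        act1 = torch.relu(self.layer1(x))  # output after the activation function
        out = self.last_layer(act1)
        return out, act1                   # expose the penultimate activation

model = ShallowNet()
model.eval()
with torch.no_grad():
    logits, features = model(torch.randn(8, 20))
print(features.shape)  # torch.Size([8, 500]) -- features for an SVM/HMM etc.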
Alternatively, you could also use forward hooks to grab intermediate activations from specified layers as described here.
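
A sketch of the hook approach (the small Sequential model is just a stand-in; register_forward_hook is the standard PyTorch API for this):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 500),
    nn.ReLU(),
    nn.Linear(500, 2),
)

features = {}

def hook(module, inputs, output):
    # store a detached copy of this layer's output
    features["penultimate"] = output.detach()

# register on the ReLU so we grab the post-activation output of the hidden layer
handle = model[1].register_forward_hook(hook)

with torch.no_grad():
    model(torch.randn(8, 20))
print(features["penultimate"].shape)  # torch.Size([8, 500])
handle.remove()  # clean up the hook when you are done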

If you are only interested in the penultimate activation, you could also replace the last layer with nn.Identity without changing the forward method at all.
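
E.g. (again with a made-up model; nn.Identity simply passes its input through, so the model's output becomes the penultimate activation):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 500),
    nn.ReLU(),
    nn.Linear(500, 2),
)
# after training, swap the classifier head for a pass-through
model[2] = nn.Identity()
with torch.no_grad():
    feats = model(torch.randn(8, 20))
print(feats.shape)  # torch.Size([8, 500]) -- penultimate activations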