I have used this solution. My batch size is 4, and I extracted an intermediate layer of my model.
I am getting an output of shape (4, 84). However, I am wondering whether the 4 means that I only extracted the activation values of one batch. How can I extract the activation values of all batches? Can you please help me out in this regard?
It seems you are passing the inputs to the linear layer as [batch_size, 4, 84], which could be interpreted as temporal data where the linear layer is applied to each time step (dim1).
Does this fit your use case? Usually you would flatten the activation before passing it to the linear layer via:
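A minimal sketch of this flattening pattern (the tensor and layer sizes here are made up for illustration; `con_2_output` stands in for your conv layer's output):

```python
import torch
import torch.nn as nn

# Hypothetical conv output for a batch of 4 samples: [batch, channels, H, W]
con_2_output = torch.randn(4, 16, 5, 5)

# Flatten everything except the batch dimension before the linear layer
flatten_values = torch.flatten(con_2_output, 1)  # shape: [4, 16*5*5] = [4, 400]

linear = nn.Linear(400, 84)
out = linear(flatten_values)  # shape: [4, 84] -- batch dim is preserved
```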
I have already flattened the input before passing it to the linear layer.
flatten_values = torch.flatten(con_2_output, 1)
After flattening, these are the three linear layers, and I want to extract the last layer’s feature activation values. Do I need to flatten again before fetching the values from the second or third linear layer?
Something seems to be off then, since the flattened activation should have two dimensions instead of [images, 4, 10]. Add a print statement to the forward method of your model and check why the activation is 3-dimensional.
No, you do not need to flatten the activations again between the linear layers, as the outputs should already have 2 dimensions.
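As a sketch of both points (the debug print in `forward` and the fact that no re-flattening is needed between linear layers; the module and its sizes are hypothetical):

```python
import torch
import torch.nn as nn

class MLPHead(nn.Module):  # hypothetical stack of three linear layers
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(400, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.fc1(x)
        print(x.shape)  # debug: expect [batch_size, 120], i.e. still 2-D
        x = self.fc2(x)
        print(x.shape)  # debug: expect [batch_size, 84], no flattening needed
        return self.fc3(x)

model = MLPHead()
out = model(torch.randn(4, 400))
print(out.shape)  # [4, 10] -- 2-D all the way through
```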
After extracting the activations, I am converting them into a NumPy array. Before converting, the shape is, let’s say, (80, 10), and after converting to NumPy it is (1, 80, 10). Why is that?
np.asarray doesn’t change the shape of each array, but it stacks a list of arrays into a single array with a new leading dimension, e.g. a list of 100 NumPy arrays of shape [64, 1, 1] becomes a single array of shape [100, 64, 1, 1].
Here is an example with other shapes which might be easier to understand:
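A small sketch of this stacking behavior (the shapes are chosen for illustration, including the single-element list that reproduces the (80, 10) → (1, 80, 10) case):

```python
import numpy as np

# A list of 3 arrays, each of shape (2, 5)
arrs = [np.zeros((2, 5)) for _ in range(3)]

stacked = np.asarray(arrs)
print(stacked.shape)  # (3, 2, 5): the new leading dim counts the list entries

# A list holding a single (80, 10) array becomes (1, 80, 10)
single = np.asarray([np.zeros((80, 10))])
print(single.shape)  # (1, 80, 10)
```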
Rather than ignoring this extra dimension, try to figure out why you are creating a list of arrays when it seems you expect a single array instead.