Flattening the output of a hidden layer in a feed-forward neural network

I need a fully connected neural network that takes a batch of 100 images (each image is the output of an encoder with dimensionality (10,)), i.e. a tensor of shape [100, 10], and outputs a vector of size 20. I first used this code:

import torch.nn as nn

def mlp(sizes, activation=nn.Tanh, output_activation=nn.Identity):
    layers = []
    for j in range(len(sizes) - 1):
        # Use the hidden activation for all but the last layer
        act = activation if j < len(sizes) - 2 else output_activation
        layers += [nn.Linear(sizes[j], sizes[j + 1]), act()]
    return nn.Sequential(*layers)
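
For context, a minimal usage sketch of this helper (the hidden size of 64 is an illustrative assumption, not fixed by my setup):

import torch

# Build an MLP mapping the 10-dimensional encoder output to 20 features;
# the hidden size of 64 is just an example choice.
net = mlp([10, 64, 20])

x = torch.randn(100, 10)  # a batch of 100 encoded images
print(net(x).shape)       # torch.Size([100, 20])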

This gives me a tensor of shape [100, 20].
Is it possible to flatten the output of the last layer before the nn.Identity and then add two more layers with shapes [2000, 100] and [100, 20]?
I wonder whether this is possible in a fully connected network, since I have only seen nn.Flatten used in the last layers of CNNs.
I appreciate the help.

If I understand the use case correctly, you wouldn’t need to flatten the activation of the last layer, as it is already 2-dimensional (it has the shape [100, 20]); flattening it to [2000] would also mix samples across the batch dimension.
You could just add two more linear layers afterwards, where the first uses in_features=20.
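
A minimal sketch of that idea, reusing the mlp helper from the question (the hidden size of 64 and the Tanh activations between the new layers are illustrative assumptions; the sizes 100 and 20 come from the layers you proposed):

import torch
import torch.nn as nn

# Your original network; 64 is just an example hidden size
base = mlp([10, 64, 20])

# Two additional linear layers; in_features=20 matches the base output,
# and the sizes 100 and 20 follow the layers proposed in the question.
extended = nn.Sequential(
    base,
    nn.Tanh(),
    nn.Linear(20, 100),
    nn.Tanh(),
    nn.Linear(100, 20),
)

x = torch.randn(100, 10)
print(extended(x).shape)  # torch.Size([100, 20])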