Extracting layers of a subclassed pretrained model in PyTorch

After loading a pre-trained model, I am able to extract its weights and biases, as well as other information such as layer names (if they exist), but not the information contained in the forward function. Also, printing the model architecture with print(model) lists the layers in the order they were defined inside the __init__ function of the nn.Module subclass, not the order in which they are actually applied in forward.

Is there a way to access the operations that appear only in the forward function and are not registered as layers, such as F.relu?

Here’s a code snippet:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc2 = nn.Linear(128, 10)
        self.dropout2 = nn.Dropout2d(0.5)
        self.fc1 = nn.Linear(9216, 128)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output

model = Net()
print(model)

And the output is:

Net(
  (fc2): Linear(in_features=128, out_features=10, bias=True)
  (dropout2): Dropout2d(p=0.5, inplace=False)
  (fc1): Linear(in_features=9216, out_features=128, bias=True)
)

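For concreteness, here is a sketch of the kind of programmatic access I am after, using torch.fx (assuming PyTorch ≥ 1.8, which ships it). Symbolic tracing records the forward graph, so the functional calls show up as nodes in execution order:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.fx import symbolic_trace

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc2 = nn.Linear(128, 10)
        self.dropout2 = nn.Dropout2d(0.5)
        self.fc1 = nn.Linear(9216, 128)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

traced = symbolic_trace(Net())
# the graph lists nodes in the order forward executes them,
# including functional calls like F.relu and F.log_softmax
for node in traced.graph.nodes:
    print(node.op, node.target)
```

This prints call_module nodes for fc1, dropout2 and fc2 interleaved with call_function nodes for the functional ops, which is exactly the ordering information missing from print(model). I am not sure whether this is viable when only a saved state_dict (no model code) is available, though.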
I would like to (1) print the layers in the order they are actually executed and (2) find out where ReLU (and other functional calls, such as max pooling) are applied.
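The closest I have gotten to (1) is registering forward hooks, which recover the execution order of the registered modules at runtime but still miss the purely functional calls like F.relu. A minimal sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc2 = nn.Linear(128, 10)
        self.dropout2 = nn.Dropout2d(0.5)
        self.fc1 = nn.Linear(9216, 128)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

model = Net()
order = []

def make_hook(name):
    def hook(module, inputs, output):
        order.append(name)  # record the module name when it runs
    return hook

# attach a forward hook to every leaf module
for name, module in model.named_modules():
    if len(list(module.children())) == 0:
        module.register_forward_hook(make_hook(name))

model(torch.randn(1, 9216))
print(order)  # ['fc1', 'dropout2', 'fc2'] -- real order, but no trace of F.relu
```

This requires running a forward pass with a correctly shaped input, and the functional ops remain invisible, which is part (2) of my question.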

My current workaround is to build the model using Sequential, but what about models that are already trained and whose implementation details (model code) are not available?
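For completeness, the Sequential rewrite I mean looks like this: the functional calls become modules (nn.ReLU, nn.LogSoftmax), so print(model) shows everything in execution order. But it obviously only works when I control the model code:

```python
import torch
import torch.nn as nn

# same architecture as Net above, with the functional ops turned into modules
model = nn.Sequential(
    nn.Linear(9216, 128),
    nn.ReLU(),
    nn.Dropout2d(0.5),
    nn.Linear(128, 10),
    nn.LogSoftmax(dim=1),
)
print(model)  # lists all five steps in the order they run
```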

Thank you!