Modify forward of pretrained model

I am trying to rewrite the inference part so that it also returns hidden-layer activations such as embeddings.

Is this a correct implementation? Are there any potential issues with it?

class FeatureExtraction(torch.nn.Module):
    def __init__(self, pretrained_model):
        super().__init__()
        self.__dict__ = pretrained_model.__dict__.copy()

    def forward(self, x):
        # ... copy original forward ...
        return proba, embeddings, some_other_layer


model2 = FeatureExtraction(pretrained_model)
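For context, here is a minimal runnable version of this wrapper with a toy stand-in for the pretrained model (the small `PretrainedModel` below is hypothetical, just to make the snippet self-contained). One thing it makes visible: `__dict__.copy()` is a shallow copy, so the wrapper shares the original model's submodules and parameters rather than duplicating them.

```python
import torch

# Hypothetical stand-in for the real pretrained architecture.
class PretrainedModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.softmax(self.fc(x), dim=-1)

class FeatureExtraction(torch.nn.Module):
    def __init__(self, pretrained_model):
        super().__init__()
        # Shallow copy: submodules/parameters are SHARED with the original model.
        self.__dict__ = pretrained_model.__dict__.copy()

    def forward(self, x):
        hidden = self.fc(x)                    # intermediate activation
        proba = torch.softmax(hidden, dim=-1)  # original output
        return proba, hidden

pretrained = PretrainedModel()
model2 = FeatureExtraction(pretrained)

# Same parameter objects, not copies:
assert model2.fc is pretrained.fc

x = torch.randn(3, 4)
proba, hidden = model2(x)
```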

If you would like to return additional values from the forward method, but otherwise keep the model as it was, you could derive your class directly from the base model:

class FeatureExtractedModel(BaseModel):

This would make the implementation cleaner in my opinion.
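A minimal sketch of this subclassing approach, assuming a hypothetical `BaseModel` (the real base class would be whatever architecture the pretrained model uses): the subclass inherits all layers unchanged and only overrides what `forward` returns.

```python
import torch

# Hypothetical base model standing in for the real pretrained architecture.
class BaseModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Linear(4, 8)
        self.head = torch.nn.Linear(8, 2)

    def forward(self, x):
        return torch.softmax(self.head(self.embed(x)), dim=-1)

class FeatureExtractedModel(BaseModel):
    # __init__ is inherited, so the layers (and their names) match BaseModel;
    # only the return value of forward changes.
    def forward(self, x):
        embeddings = self.embed(x)
        proba = torch.softmax(self.head(embeddings), dim=-1)
        return proba, embeddings

model = FeatureExtractedModel()
# Because the parameter names match, pretrained weights can be loaded directly:
# model.load_state_dict(pretrained_model.state_dict())

x = torch.randn(3, 4)
proba, embeddings = model(x)
```

Since the subclass defines no new parameters, `load_state_dict` on a checkpoint saved from the base model works without any key remapping.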