Changing a Linear layer in a pretrained model changes the activation

Hello. I am using a pretrained model, but I want to replace the final Linear layer so it can accommodate a different number of classes. However, it seems that while the original layer had no activation attached (i.e. f(x) = x), after replacing it the activation becomes a hyperbolic tangent (f(x) = tanh(x)). Here is the code:

import torch
from torchvision import models


model = models.mobilenet_v2(pretrained=True)
# last layer(s) defined in model.classifier

print(model.classifier)
# Sequential(
#   (0): Dropout(p=0.2, inplace=False)
#   (1): Linear(in_features=1280, out_features=1000, bias=True)
# )

x = torch.randn((1, 3, 224, 224))
y = model.forward(x)

print(y.min().item(), y.max().item())
# -3.4289, 2.9577

# change the last layer to predict 3 classes
model.classifier[1] = torch.nn.Linear(in_features=1280, out_features=3, bias=True)

y = model.forward(x)

print(y.min().item(), y.max().item())
# -0.185, 0.6268

Repeating this shows that the new classifier always predicts y in the range [-1, 1]. What is happening here?
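
One thing worth checking (just a guess at the cause, not a confirmed answer): a freshly created nn.Linear has no activation attached at all, and its default initialisation keeps the weights and bias roughly within ±1/sqrt(in_features) ≈ ±0.028 for 1280 input features, so the new head's outputs may simply be small rather than squashed. A stand-alone snippet to inspect this (new_fc is just a throwaway name here):

import torch

new_fc = torch.nn.Linear(in_features=1280, out_features=3, bias=True)

# the default init draws weight and bias from a uniform distribution bounded
# by roughly 1/sqrt(in_features) = 1/sqrt(1280) ≈ 0.028
print(new_fc.weight.abs().max().item())
# ~0.028
print(new_fc.bias.abs().max().item())
# ~0.028

print(new_fc)
# Linear(in_features=1280, out_features=3, bias=True)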

I think that if we change the input, we will get values outside of the [-1, 1] range. I was able to get a value greater than 1:

x = torch.randn(1, 3, 224, 224)
model.classifier[1] = torch.nn.Linear(in_features=1280, out_features=3, bias=True)
y = model.forward(x)
print(y.min().item(), y.max().item())
# -0.0004289243370294571 1.1706701517105103

Weird. I even put the y = model.forward(x) call inside a loop and never seem to get values outside [-1, 1]…
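
If it helps, here is one way to test whether a tanh is really being applied somewhere (a sketch of my own, assuming the replaced layer is the only change to the model): scale the new layer's weights up by a large factor. A tanh would keep the outputs inside [-1, 1] no matter what, whereas a plain Linear layer will go far outside that range.

import torch
from torchvision import models

model = models.mobilenet_v2(pretrained=True).eval()
model.classifier[1] = torch.nn.Linear(in_features=1280, out_features=3, bias=True)

with torch.no_grad():
    # blow up the freshly initialised weights; a tanh after the layer would
    # still clamp the outputs to [-1, 1], a plain Linear will not
    model.classifier[1].weight.mul_(100)
    y = model(torch.randn(1, 3, 224, 224))

print(y.min().item(), y.max().item())
# should be far outside [-1, 1] if nothing is squashing the output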