Absent ReLU Layers in Pretrained GoogLeNet?

Hello,

I want to use the pretrained GoogLeNet, but when I compared the printed architecture to the original paper cited in the PyTorch documentation, I noticed that the ReLU layers appear to be missing.

According to the paper, “All the convolutions, including those inside the Inception modules, use rectified linear activation.” But when I run the following commands and view the output:

from torchvision import models

cnn = models.googlenet(pretrained=True)
print(cnn)

I get (only first 3 convolutional layers shown):

GoogLeNet(
  (conv1): BasicConv2d(
    (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  (maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
  (conv2): BasicConv2d(
    (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  (conv3): BasicConv2d(
    (conv): Conv2d(64, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )

When I run the equivalent commands for AlexNet (shown below), I can see dedicated ReLU layers in the printed architecture.
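
For completeness, the commands were the same as above, just with the AlexNet constructor:

cnn = models.alexnet(pretrained=True)
print(cnn)

The relevant part of the output (only the first 3 convolutional layers shown) is: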

AlexNet(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
    (1): ReLU(inplace=True)
    (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (4): ReLU(inplace=True)
    (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (7): ReLU(inplace=True)

Would anyone be able to tell me why these ReLU layers are absent, or whether I am doing something wrong? If it isn’t ReLU, what activation function is being used, and how is it applied?

Thank you in advance!

Based on the code, the ReLU is applied through the functional API (F.relu) inside BasicConv2d’s forward method. Functional calls are not registered as child modules, which is why no ReLU lines show up in print(model).
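
Here is a minimal sketch of the idea (a simplified stand-in, not the exact torchvision source) showing why a functionally applied activation never appears in the printed module tree:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Simplified stand-in for torchvision's BasicConv2d (names and defaults are
# illustrative). The ReLU is called functionally inside forward(), so it is
# not a child module and therefore does not appear in print(model).
class BasicConv2dSketch(nn.Module):
    def __init__(self, in_channels, out_channels, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels, eps=0.001)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        return F.relu(x, inplace=True)  # activation happens here, invisible to print()

block = BasicConv2dSketch(3, 64, kernel_size=7, stride=2, padding=3)
print(block)                              # lists only (conv) and (bn)
out = block(torch.randn(1, 3, 224, 224))
print(out.min())                          # >= 0, so the ReLU is clearly applied

You can also check this directly against your installed version (assuming it exposes BasicConv2d at this path):

import inspect
from torchvision.models.googlenet import BasicConv2d
print(inspect.getsource(BasicConv2d.forward))  # should show the F.relu(...) call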

I see, that makes sense. Thank you so much!