VGG 16 Architecture

Hello Forum,

I wanted to run some experiments tweaking the architecture of VGG 16, to get a sense of the authors’ intuition, but I am not able to find the code for the PyTorch implementation of VGG 16.

I only found this link https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py where a .pth file is loaded for the pretrained VGG weights.

Is there any way I can get the code for the VGG-16 architecture, or will I have to write out the architecture that models.vgg16() prints myself?

Please do let me know.

I don’t really understand the question.
There’s a PyTorch implementation of each VGG architecture (at the various depths) at the link you posted.

Well, your link contains the code if you look carefully.

If you call make_layers(cfg['D']) you will obtain an nn.Sequential object containing the feature extractor part of the VGG 16 model (so you can get every layer, in the right order, from this object).
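For example, a minimal sketch (assuming the cfg dict and make_layers function as they appear in that version of vgg.py; newer torchvision releases rename the dict to cfgs):

from torchvision.models.vgg import make_layers, cfg  # named 'cfgs' in newer torchvision

# 'D' is the VGG-16 configuration in the linked file
features = make_layers(cfg['D'])   # nn.Sequential of Conv2d / ReLU / MaxPool2d layers
print(features[0])                 # Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))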

Then for the classifier part, you will find it in the general VGG object definition here:

self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),
        )

If you combine those two elements you will have the whole VGG 16 structure. Maybe it will be clearer that way.
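For example, a minimal sketch of the combined model (my own re-coding, under the same assumptions about make_layers/cfg as above; the real vgg.py additionally handles weight initialization and loading pretrained weights):

import torch
import torch.nn as nn
from torchvision.models.vgg import make_layers, cfg  # named 'cfgs' in newer torchvision

class MyVGG16(nn.Module):
    def __init__(self, num_classes=1000):
        super(MyVGG16, self).__init__()
        self.features = make_layers(cfg['D'])      # conv / ReLU / max-pool stack
        self.classifier = nn.Sequential(           # fully connected head
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)          # (N, 512, 7, 7) for a 224x224 input
        x = x.view(x.size(0), -1)     # flatten to (N, 25088)
        return self.classifier(x)

model = MyVGG16()
out = model(torch.randn(1, 3, 224, 224))   # torch.Size([1, 1000])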

Hope this helps.

I want the PyTorch code for this:

VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace)
(5): Dropout(p=0.5)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)

I want to try making changes to the architecture (kernel/filter sizes, some other activation like ELU) and check its performance.

If I want to make a change to any one of the layers, what code should I use?

Copy-paste the source code from GitHub and make your changes, either in this part:

self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),
        )

to modify the classifier, or in the make_layers function for the feature extractor part.

But at this point you might as well just re-code the network; it won’t be very long and it will be much clearer.
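As a side note (my own illustration, not part of the answer above): if you only want to change one or two layers, you can also modify a stock torchvision model in place, since features and classifier are plain nn.Sequential containers you can index into:

import torch.nn as nn
from torchvision import models

model = models.vgg16()   # random weights; pretrained=True would download the .pth file

# replace the first conv with a different (hypothetical) kernel size
model.features[0] = nn.Conv2d(3, 64, kernel_size=5, stride=1, padding=2)
# replace the 1000-class ImageNet head with, say, a 10-class one
model.classifier[6] = nn.Linear(4096, 10)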


You can do that by modifying the corresponding lines in that file.
For example, if you want to change all the ReLUs in the model architecture into ELUs,
replace nn.ReLU on lines 31, 34, 50, 70, 72 of https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py with nn.ELU, then call the vggXX function to build a model instance.
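A roughly equivalent sketch that swaps ReLU for ELU on an already-built model, without editing vgg.py (my own illustration; it relies only on features and classifier being indexable nn.Sequential containers):

import torch.nn as nn
from torchvision import models

model = models.vgg16()

# walk the two Sequential containers and swap every ReLU for an ELU
for seq in (model.features, model.classifier):
    for idx, layer in enumerate(seq):
        if isinstance(layer, nn.ReLU):
            seq[idx] = nn.ELU(inplace=True)

print(model.features[1])   # ELU(alpha=1.0, inplace=True)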


Oh thanks both, will do that.

But isn’t there any code already written out like this for PyTorch?

It would reduce the work; I am too lazy, lol.

You already found the corresponding code in PyTorch… but indeed it is less straightforward. Anyway, if you’re lazy, just copy-paste the right parts; it will take you 10 minutes at most to re-code the whole model.

Good luck

Yeah, already halfway through. Thanks anyway.