Is it possible to specify a name for each layer when creating the model?

I couldn’t see anything in the layer documentation, but I wonder if it is possible to have a name for a layer at all

Maybe you can use nn.ModuleDict for naming the layers. Here is a simple example:

import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.model = nn.ModuleDict({
                'conv1': nn.Conv2d(1, 8, 3, padding=1),
                'conv2': nn.Conv2d(8, 16, 3, padding=1),
                'conv3': nn.Conv2d(16, 32, 3, padding=1),

                'pool1': nn.MaxPool2d(2),
                'pool2': nn.MaxPool2d(2),

                'activ1': nn.ReLU(),
                'activ2' : nn.ReLU(),

                'fc': nn.Linear(512, 100),
                'sigmoid': nn.Sigmoid(),
        })

    def forward(self, x):
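        # apply the layers in an explicit order, since the dict keys alone don't encode it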
        for name in ['conv1', 'pool1', 'activ1',
                     'conv2', 'pool2', 'activ2',
                     'conv3']:
            x = self.model[name](x)
        x = x.view(-1, 512)
        x = self.model['fc'](x)
        x = self.model['sigmoid'](x)
        return x

Now, you can create a model and print it to see the names of different layers:

>>> m = MyModule()
>>> print(m)

MyModule(
  (model): ModuleDict(
    (activ1): ReLU()
    (activ2): ReLU()
    (conv1): Conv2d(1, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (conv2): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (conv3): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fc): Linear(in_features=512, out_features=100, bias=True)
    (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (sigmoid): Sigmoid()
  )
)
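
With the names in place you can also grab a layer directly by its key, or walk over all named submodules. A quick sketch using the MyModule above:

m = MyModule()

# fetch a single layer by its key
print(m.model['conv1'])

# list all (name, module) pairs, including nested ones like 'model.conv1'
# (the root module itself has the empty name '')
for name, module in m.named_modules():
    print(name)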

Alternative Solution based on @Shisho_Sama’s suggestion:

Since a ModuleDict built from a plain dict may not preserve the intended order of its elements (plain dicts are only insertion-ordered on Python 3.7+), we can add the layers to an OrderedDict and then pass it to Sequential:

import torch
import torch.nn as nn
from collections import OrderedDict

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.model = nn.Sequential(OrderedDict([
                ('conv1', nn.Conv2d(1, 8, 3, padding=1)),
                ('pool1', nn.MaxPool2d(2)),
                ('activ1', nn.ReLU()),
                ('conv2', nn.Conv2d(8, 16, 3, padding=1)),
                ('pool2', nn.MaxPool2d(2)),
                ('conv3', nn.Conv2d(16, 32, 3, padding=1)),
                ('activ2', nn.ReLU())]))

        self.fc = nn.Linear(512, 100)

    def forward(self, x):
        x = self.model(x)
        x = x.reshape(-1, 512)
        x = self.fc(x)
        return torch.sigmoid(x)

printing an instance:

>>> m = MyModule()
>>> print(m)

MyModule(
  (model): Sequential(
    (conv1): Conv2d(1, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (activ1): ReLU()
    (conv2): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (conv3): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (activ2): ReLU()
  )
  (fc): Linear(in_features=512, out_features=100, bias=True)
)
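
The layer names also show up in the state_dict keys, and the named children of a Sequential are reachable as attributes, which is handy for saving/loading and debugging. A small sketch using the module above:

m = MyModule()

# parameter names include the layer names, e.g. 'model.conv1.weight', 'fc.bias'
print(list(m.state_dict().keys()))

# named children of a Sequential are also accessible as attributes
print(m.model.conv1)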

Thanks a lot. Is nn.ModuleDict an OrderedDict? Can we assume the order?

Sure, no problem.

About your question: the ModuleDict in the example was built from a plain dict, so I wouldn't rely on its iteration order; that's why the forward pass above keeps the order of the names in an explicit list!
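
If you want to check what order you actually got, here is a quick sketch (assuming the MyModule from the first example):

m = MyModule()
print(list(m.model.keys()))   # iteration order of the ModuleDict keys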


In that case, looking at /torch/nn/modules/container.html, would we not be better off using an OrderedDict with nn.Sequential? They give an example like this:

    To make it easier to understand, here is a small example::

        # Example of using Sequential
        model = nn.Sequential(
                  nn.Conv2d(1,20,5),
                  nn.ReLU(),
                  nn.Conv2d(20,64,5),
                  nn.ReLU()
                )

        # Example of using Sequential with OrderedDict
        model = nn.Sequential(OrderedDict([
                  ('conv1', nn.Conv2d(1,20,5)),
                  ('relu1', nn.ReLU()),
                  ('conv2', nn.Conv2d(20,64,5)),
                  ('relu2', nn.ReLU())
                ]))
    """

By the way, I noticed we can also add named layers one by one with add_module:

model = nn.Sequential()
model.add_module('conv1', nn.Conv2d(1, 20, 5))
model.add_module('relu1', nn.ReLU())
model.add_module('conv2', nn.Conv2d(20, 64, 5))
model.add_module('relu2', nn.ReLU())
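
This incremental style with add_module is also convenient when the layers are generated in a loop; a minimal sketch with made-up channel sizes:

import torch.nn as nn

channels = [1, 20, 64]          # arbitrary example sizes
model = nn.Sequential()
for i, (c_in, c_out) in enumerate(zip(channels, channels[1:]), start=1):
    model.add_module(f'conv{i}', nn.Conv2d(c_in, c_out, 5))
    model.add_module(f'relu{i}', nn.ReLU())

print(model)                    # shows conv1, relu1, conv2, relu2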


Yes, that’s even better! :blush:

So, I changed the previous example to use an OrderedDict:

import torch
import torch.nn as nn
from collections import OrderedDict

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.model = nn.Sequential(OrderedDict([
                ('conv1', nn.Conv2d(1, 8, 3, padding=1)),
                ('pool1', nn.MaxPool2d(2)),
                ('activ1', nn.ReLU()),
                ('conv2', nn.Conv2d(8, 16, 3, padding=1)),
                ('pool2', nn.MaxPool2d(2)),
                ('conv3', nn.Conv2d(16, 32, 3, padding=1)),
                ('activ2', nn.ReLU())]))

        self.fc = nn.Linear(512, 100)

    def forward(self, x):
        x = self.model(x)
        x = x.reshape(-1, 512)
        x = self.fc(x)
        return torch.sigmoid(x)

and printing:

>>> m = MyModule()
>>> print(m)

MyModule(
  (model): Sequential(
    (conv1): Conv2d(1, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (activ1): ReLU()
    (conv2): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (conv3): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (activ2): ReLU()
  )
  (fc): Linear(in_features=512, out_features=100, bias=True)
)
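
Having named layers also makes it easy to hook into a specific one, e.g. to capture intermediate activations. A quick sketch using the MyModule above (the save_conv2 helper and the 16x16 input are just for illustration; 16x16 is chosen so the flattened size is 512):

import torch

m = MyModule()

feats = {}
def save_conv2(module, inputs, output):
    # store the activation of the layer named 'conv2'
    feats['conv2'] = output.detach()

m.model.conv2.register_forward_hook(save_conv2)

x = torch.randn(1, 1, 16, 16)
_ = m(x)
print(feats['conv2'].shape)     # torch.Size([1, 16, 8, 8])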

Thanks a lot, I like both answers. Maybe it would be a good idea to add the second solution to the accepted answer; that way, both approaches are next to each other, and one can decide which one fits their situation better.

Yes, that’s a good idea! I will add this answer to the one marked as solution!

Thanks
