How to connect/hook two (or even more) models together

I have a baseline (e.g. VGG) and I want to connect several small models to different places of the baseline.
For simplicity, in this example I am going to do it at the end of the baseline. Then I want to train them together and share the losses.
I did the following, but I am not sure why I got this error.
Can you please help/guide me?
Thanks :slight_smile:

So I have the feature part of vgg like this:

vgg16 = models.vgg16(pretrained=True).to(device)
vgg_feature = vgg16.features

If I print it, I get:

print("vgg_feature:\n",vgg_feature)
print(type(vgg_feature))
 vgg_feature:
 Sequential(
  (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (1): ReLU(inplace)
  (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (3): ReLU(inplace)
...
  (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (25): ReLU(inplace)
  (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (27): ReLU(inplace)
  (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (29): ReLU(inplace)
  (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
<class 'torch.nn.modules.container.Sequential'>

Now here is one of the new models that I want to attach to the baseline:

Attch1 = nn.ModuleList([])
Attch1.append(nn.Conv2d(512, 4, 16))
Attch1 = nn.Sequential(*Attch1)
print(Attch1)
print(type(Attch1))
Sequential(
  (0): Conv2d(512, 4, kernel_size=(16, 16), stride=(1, 1))
)
<class 'torch.nn.modules.container.Sequential'>

Then I thought I could do it like this:

import itertools
def forward(x):
        xs = []
        for name in itertools.chain(vgg_feature, Attch1):
            name(x)
            print(name)

and call the forward function:
forward(torch.rand(1,3,512,512))

But I get this error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-19-09f6aa1136a0> in <module>()
----> 1 forward(torch.rand(1,3,512,512))

<ipython-input-14-2f8da09fc583> in forward(x)
      3         xs = []
      4         for name in itertools.chain(vgg_feature,Attch1):
----> 5             name(x)
      6             print(name)
      7 

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    489             result = self._slow_forward(*input, **kwargs)
    490         else:
--> 491             result = self.forward(*input, **kwargs)
    492         for hook in self._forward_hooks.values():
    493             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
    299     def forward(self, input):
    300         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301                         self.padding, self.dilation, self.groups)
    302 
    303 

RuntimeError: Given groups=1, weight[64, 64, 3, 3], so expected input[1, 3, 512, 512] to have 64 channels, but got 3 channels instead

So if I give a 3x512x512 input to the VGG base, I expect to get an output of size 512x16x16, which I then send to the other model (Attch1), but I am not sure why the above error happens.


Out = vgg_feature[:](torch.rand(1,3,512,512))
print(Out.size())
torch.Size([1, 512, 16, 16])
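
And passing that output into Attch1 directly also lines up shape-wise (a 16x16 kernel on a 16x16 map should give a 1x1 spatial output):

Out2 = Attch1(Out)
print(Out2.size())  # expected: torch.Size([1, 4, 1, 1])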

Update:
I also tried this:

import itertools
def forward(x):
        xs = []
        for name, m in itertools.chain(vgg_feature._modules.items(),Attch1._modules.items()):
            m(x)
            print(name)

            

But I got the same error :/
forward(torch.rand(1,3,512,512))

0
1

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-27-09f6aa1136a0> in <module>()
----> 1 forward(torch.rand(1,3,512,512))

<ipython-input-26-0e90031a3b93> in forward(x)
      3         xs = []
      4         for name, m in itertools.chain(vgg_feature._modules.items(),Attch1._modules.items()):
----> 5             m(x)
      6             print(name)
      7 

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    489             result = self._slow_forward(*input, **kwargs)
    490         else:
--> 491             result = self.forward(*input, **kwargs)
    492         for hook in self._forward_hooks.values():
    493             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
    299     def forward(self, input):
    300         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301                         self.padding, self.dilation, self.groups)
    302 
    303 

RuntimeError: Given groups=1, weight[64, 64, 3, 3], so expected input[1, 3, 512, 512] to have 64 channels, but got 3 channels instead

Update 2:
I am not sure where the problem comes from, but even when I just do this:

import itertools
def forward(x):
        xs = []
        for name, m in itertools.chain(vgg_feature._modules.items()):
            print(name,m)
            m(x)

forward(torch.rand(1,3,512,512))

I get the error, which confuses me :frowning:

0 Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
1 ReLU(inplace)
2 Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-42-bb552d02b5df> in <module>()
      6             m(x)
      7 
----> 8 forward(torch.rand(1,3,512,512))

<ipython-input-42-bb552d02b5df> in forward(x)
      4         for name, m in itertools.chain(vgg_feature._modules.items()):
      5             print(name,m)
----> 6             m(x)
      7 
      8 forward(torch.rand(1,3,512,512))

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    489             result = self._slow_forward(*input, **kwargs)
    490         else:
--> 491             result = self.forward(*input, **kwargs)
    492         for hook in self._forward_hooks.values():
    493             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
    299     def forward(self, input):
    300         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301                         self.padding, self.dilation, self.groups)
    302 
    303 

RuntimeError: Given groups=1, weight[64, 64, 3, 3], so expected input[1, 3, 512, 512] to have 64 channels, but got 3 channels instead

Try to assign the result back to x so that the result from m(x) will be fed to the next module.


@ptrblck

Interesting!!

def forward(x):
        xs = []
        for name, m in itertools.chain(vgg_feature._modules.items(),Attch1._modules.items()):
            print(name,m)
            x = m(x)
        return x
a = forward(torch.rand(1,3,512,512))

It solved the problem! But why should I do that?

Also, could you please let me know whether the way I did it is a good approach in general?

If I want to take the outputs of different layers at the same time
(e.g. the last layer, after (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),
and the intermediate layer
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)) and send them to two different networks (e.g. Attach1 and Attach2),
how should I do that in this case?

Thanks a lot

You have to assign the return value, because otherwise the calculated result will be lost. Your model does not store the result in-place in x, but returns its output.

I would rather create a new nn.Sequential module from your sub-modules than iterate over the parts.
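E.g. a minimal sketch, reusing your vgg_feature and Attch1 from above:

combined = nn.Sequential(vgg_feature, Attch1)
out = combined(torch.rand(1, 3, 512, 512))
print(out.size())  # should be torch.Size([1, 4, 1, 1])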
Another approach would be to write your own nn.Module and implement forward yourself. That would give you more flexibility, especially for your use case of using activations from different layers.
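
A rough sketch for that use case could look like the following. Note that Attch2 and the exact split point (after layer (23)) are just placeholders you would adapt to your own setup:

import torch
import torch.nn as nn

class VGGWithHeads(nn.Module):
    def __init__(self, vgg_feature, attach1, attach2):
        super(VGGWithHeads, self).__init__()
        # split the VGG features at layer (23), the intermediate MaxPool2d
        self.stage1 = vgg_feature[:24]   # layers (0) ... (23)
        self.stage2 = vgg_feature[24:]   # layers (24) ... (30)
        self.attach1 = attach1           # head fed by the final layer (30)
        self.attach2 = attach2           # head fed by the intermediate layer (23)

    def forward(self, x):
        feat23 = self.stage1(x)          # activation after layer (23)
        feat30 = self.stage2(feat23)     # activation after layer (30)
        return self.attach1(feat30), self.attach2(feat23)

# placeholder second head: layer (23) gives 512 channels at 32x32 for a 512x512 input
Attch2 = nn.Conv2d(512, 4, 32)
model = VGGWithHeads(vgg_feature, Attch1, Attch2)
out1, out2 = model(torch.rand(1, 3, 512, 512))

The two outputs can then be passed to separate loss functions and the losses summed before calling backward().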

Thanks for the suggestions.
Is there any example of those two suggested methods that I can read and try to learn from?

Here you can find an example of creating your own nn.Module.


Oh, I see what you mean.
I will give it a try and come back if I face issues 8-|
Thanks a lot, Boss!