Idea: Does it make sense to feed high-level features back into the initial layers?

In computer vision, the initial filters of a CNN are known to output low-level features such as edges and corners, while the later layers capture high-level information such as the semantics of the image.

In the human brain, neurons are connected in extremely complex ways, which allows information to flow back and forth between low-level and high-level neurons. In contrast, current artificial networks are typically connected in a purely sequential, feed-forward manner.

Another advantage would be a reduction in the number of trainable parameters, since the same weights are reused on every pass. This sharing could also act as regularisation.
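As a rough sanity check of the parameter argument, here is a minimal comparison (the layer sizes are arbitrary and just for illustration): two shared convolutions applied twice hold half the weights of an unrolled four-conv stack.

import torch.nn as nn

# Two shared convs (applied twice) vs. four independent convs
shared = nn.ModuleList([nn.Conv2d(3, 3, 3, padding=1) for _ in range(2)])
unrolled = nn.ModuleList([nn.Conv2d(3, 3, 3, padding=1) for _ in range(4)])

num_params = lambda m: sum(p.numel() for p in m.parameters())
print(num_params(shared), num_params(unrolled))  # 168 vs. 336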

Are there any works in this direction? Does it make sense to train networks in such a looped manner?

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # The same two convolutions are applied twice, so their weights
        # are shared between the first and the second pass.
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv1(x)
        x = self.conv2(x)
        return x

# Forward pass to verify the output shape
net = Net().cuda()
print(net)
print(net(torch.rand(2, 3, 512, 512).cuda()).shape)

Output:

  Net(
    (conv1): Conv2d(3, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (conv2): Conv2d(3, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  )
  torch.Size([2, 3, 512, 512])
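To experiment with the looping depth directly, the forward pass can also be written with an explicit iteration count. This is just a sketch of the same idea; the num_loops hyperparameter name is my own choice:

class LoopedNet(nn.Module):
    def __init__(self, num_loops=2):
        super(LoopedNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, stride=1, padding=1)
        self.num_loops = num_loops

    def forward(self, x):
        # Gradients from every pass accumulate in the shared weights,
        # similar to backpropagation through time in an RNN.
        for _ in range(self.num_loops):
            x = self.conv2(self.conv1(x))
        return x

looped = LoopedNet(num_loops=3).cuda()
print(looped(torch.rand(2, 3, 512, 512).cuda()).shape)  # torch.Size([2, 3, 512, 512])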

Generally, I would recommend to just try out your ideas, especially if you would like to test a specific hypothesis.
However, while the comparison between the human brain (or visual system) and neural networks fits a few use cases, I think a lot of understanding is still missing. :wink:
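Note that reusing conv1 and conv2 as in your snippet shares weights, but it doesn't literally feed high-level features back into the early layers. If you want to test that specific hypothesis, something like the following sketch might be a starting point; the additive feedback and the layer names are just one possible design I came up with, not an established recipe:

import torch
import torch.nn as nn

class FeedbackNet(nn.Module):
    def __init__(self, num_loops=2):
        super(FeedbackNet, self).__init__()
        self.low = nn.Conv2d(3, 3, kernel_size=3, stride=1, padding=1)   # early, low-level layer
        self.high = nn.Conv2d(3, 3, kernel_size=3, stride=1, padding=1)  # later, high-level layer
        self.num_loops = num_loops

    def forward(self, x):
        feedback = torch.zeros_like(x)
        for _ in range(self.num_loops):
            # Inject the previous high-level features into the early layer's input
            h = self.low(x + feedback)
            feedback = self.high(h)
        return feedback

out = FeedbackNet(num_loops=2)(torch.rand(2, 3, 512, 512))
print(out.shape)  # torch.Size([2, 3, 512, 512])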