I was looking for a simple example of Siamese networks and found this article on Hackernoon.

The input batch first goes through `nn.ReflectionPad2d(1)`, as shown below:

```python
import torch.nn as nn


class SiameseNetwork(nn.Module):
    def __init__(self):
        super(SiameseNetwork, self).__init__()
        self.cnn1 = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(1, 4, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(4),
            nn.Dropout2d(p=.2),

            nn.ReflectionPad2d(1),
            nn.Conv2d(4, 8, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(8),
            nn.Dropout2d(p=.2),

            nn.ReflectionPad2d(1),
            nn.Conv2d(8, 8, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(8),
            nn.Dropout2d(p=.2),
        )

        self.fc1 = nn.Sequential(
            nn.Linear(8 * 100 * 100, 500),
            nn.ReLU(inplace=True),

            nn.Linear(500, 500),
            nn.ReLU(inplace=True),

            nn.Linear(500, 5)
        )

    def forward_once(self, x):
        output = self.cnn1(x)
        # flatten the conv features to (batch_size, 8 * 100 * 100)
        output = output.view(output.size(0), -1)
        output = self.fc1(output)
        return output

    def forward(self, input1, input2):
        output1 = self.forward_once(input1)
        output2 = self.forward_once(input2)
        return output1, output2
```
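To sanity-check the shapes, here is a minimal smoke test (my own sketch, not from the article), assuming 100×100 single-channel inputs: each `ReflectionPad2d(1)` grows the feature map to 102×102 and the following 3×3 convolution shrinks it back to 100×100, which is why `fc1` expects `8*100*100` flattened features.

```python
import torch

net = SiameseNetwork()

# two batches of 100x100 grayscale images, matching the 8*100*100 Linear input
img1 = torch.randn(2, 1, 100, 100)
img2 = torch.randn(2, 1, 100, 100)

out1, out2 = net(img1, img2)
print(out1.shape, out2.shape)  # torch.Size([2, 5]) torch.Size([2, 5])
```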
1. Why does a Siamese network (or the author’s approach) need padding before feeding the input into the convolutions?
2. What is the point of using `ReflectionPad` instead of zero padding?
1. I guess the `ReflectionPad2d` layers were added because `nn.Conv2d` only supported zero padding in the past (in newer PyTorch versions you can specify the `padding_mode`).
2. I don’t know if the author has explained this architecture in a research paper, but I would guess that this padding type worked better than zero padding in their experiments.
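As a side note, a small sketch (my own, not from the thread) showing that in current PyTorch the explicit padding layer and the conv’s built-in `padding_mode` behave the same with respect to the output shape:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 100, 100)

# The article's formulation: explicit reflection padding followed by an
# unpadded 3x3 convolution.
explicit = nn.Sequential(nn.ReflectionPad2d(1), nn.Conv2d(1, 4, kernel_size=3))

# Newer PyTorch: the same padding folded into the conv via padding_mode.
built_in = nn.Conv2d(1, 4, kernel_size=3, padding=1, padding_mode='reflect')

print(explicit(x).shape)  # torch.Size([1, 4, 100, 100])
print(built_in(x).shape)  # torch.Size([1, 4, 100, 100])
```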