Is there a specific way to pass your batches to a branched network?
For example, say your batch has dimensions [N, C, H, W] of [67, 2, 64, 64], and your network looks like the following:
class BranchedNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.feature_extractor = torch.nn.Sequential(
            torch.nn.Conv2d(1, 64, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2),
            torch.nn.Conv2d(128, 256, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2))
        self.feature_extractor2 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 64, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2),
            torch.nn.Conv2d(128, 256, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2))
        self.classifier = torch.nn.Sequential(
            torch.nn.Linear((256 * 16 * 16) * 2, 264),
            torch.nn.ReLU(),
            torch.nn.Linear(264, 264),
            torch.nn.ReLU(),
            torch.nn.Linear(264, 1))

    def forward(self, x, y):
        features = self.feature_extractor(x)
        features = features.view(x.size(0), -1)
        features2 = self.feature_extractor2(y)  # second branch for the second input
        features2 = features2.view(y.size(0), -1)
        # torch.cat takes a tuple of tensors; concatenate along the feature dim
        grouped = torch.cat((features, features2), dim=1)
        output = self.classifier(grouped)
        return output
Would you have to specify what x and y are during training?
I ask because I have attempted this with the structure above and get the following error when simply passing model(batch) during training:
TypeError: forward() missing 1 required positional argument: 'y'
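For reference, since forward(self, x, y) declares two positional tensors, the call site has to supply both. A minimal sketch of splitting the batch before the call, assuming (my assumption, not stated above) that the two channels of the [67, 2, 64, 64] batch are meant to feed the two branches:

```python
import torch

# Dummy batch with the shape from the question: [N, C, H, W] = [67, 2, 64, 64]
batch = torch.randn(67, 2, 64, 64)

# Slice with a range (0:1, 1:2) rather than an index so the channel
# dimension is kept, matching the Conv2d(1, ...) input each branch expects.
x = batch[:, 0:1]  # first channel  -> [67, 1, 64, 64]
y = batch[:, 1:2]  # second channel -> [67, 1, 64, 64]

# output = model(x, y)   # instead of model(batch)
```

If the two inputs instead come from separate datasets, the same idea applies: unpack both tensors from the loader and pass them as two arguments.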
EDIT:
Solved by @ptrblck -see Concatenate layer output with additional input data