Hi,
I am working on a regression problem related to computational fluid dynamics, using residual NNs. My whole network consists of fully connected layers.
In the forward pass I am using 2 different types of layers:
- Normal Fully connected layer
- Bottleneck layer (adds the residual from the previous layer)
This is the network I am using:
CombustionModel(
(Fc1): Linear(in_features=2, out_features=500, bias=True)
(Fc2): Linear(in_features=500, out_features=500, bias=True)
(Fc3_bottleneck): Linear(in_features=500, out_features=100, bias=True)
(Fc4): Linear(in_features=100, out_features=500, bias=True)
(Fc5_bottleneck): Linear(in_features=500, out_features=100, bias=True)
(Fc6): Linear(in_features=100, out_features=500, bias=True)
(Fc7_bottleneck): Linear(in_features=500, out_features=100, bias=True)
(Fc8): Linear(in_features=100, out_features=500, bias=True)
(Fc9_bottleneck): Linear(in_features=500, out_features=100, bias=True)
(Fc10): Linear(in_features=100, out_features=500, bias=True)
(Fc11_bottleneck): Linear(in_features=500, out_features=100, bias=True)
(Fc12): Linear(in_features=100, out_features=7, bias=True)
)
For computing the output of one residual block, this is the code I am using:
x = self.Fc1(x)
x = F.relu(x)
# First ResNet block
res_calc = self.Fc2(x)
res_calc = F.relu(res_calc)
res_calc = self.Fc3_bottleneck(res_calc)
x = F.relu(torch.add(x, res_calc))
Now the line x = F.relu(torch.add(x, res_calc)) raises an error saying there is a dimension clash.
Typically, the size of x in the layer computations is torch.Size([128, 500]), while the bottleneck's size, torch.Size([128, <size>]), varies; 128 is the batch size.
I want to downsample the x tensor to the same size as res_calc in order to add them, but it always gives the dimension clash.
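To make the problem concrete, here is a minimal standalone reproduction of the clash using the shapes above (500 features out of the first block's skip path vs. 100 out of the bottleneck); the tensors are random stand-ins for the real activations:

```python
import torch

# x: activations on the skip path after Fc1/ReLU -> [batch, 500]
# res_calc: output of Fc3_bottleneck            -> [batch, 100]
x = torch.randn(128, 500)
res_calc = torch.randn(128, 100)

# Element-wise addition requires broadcastable shapes; 500 vs. 100
# (and neither equal to 1) cannot broadcast, so this raises RuntimeError.
try:
    torch.add(x, res_calc)
except RuntimeError as e:
    print("dimension clash:", e)
```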
I tried torch.nn.functional.interpolate,
but it only works for 3D (and higher-dimensional) inputs.
Any ideas about how to solve this problem would be helpful.
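For reference, one way to get past the 3D requirement is to temporarily insert a dummy channel dimension before calling interpolate and squeeze it back out afterwards. This is only a sketch of the shape manipulation, not necessarily the right fix for the network:

```python
import torch
import torch.nn.functional as F

x = torch.randn(128, 500)  # [batch, features]

# F.interpolate with mode='linear' expects [batch, channels, length],
# so add a singleton channel dim, resample the feature axis, then drop it.
x3d = x.unsqueeze(1)                                              # [128, 1, 500]
down = F.interpolate(x3d, size=100, mode='linear', align_corners=False)
down = down.squeeze(1)                                            # [128, 100]
print(down.shape)
```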
Regards.