# Downsampling 2-dimensional tensors

Hi,
I am working on a regression problem in computational fluid dynamics using residual NNs. My whole network uses fully connected layers.
During the forward pass I use two different types of layers:

1. Normal Fully connected layer
2. Bottleneck layer (adds the residual from the previous layer)

This is the network I am using:

```
CombustionModel(
  (Fc1): Linear(in_features=2, out_features=500, bias=True)
  (Fc2): Linear(in_features=500, out_features=500, bias=True)
  (Fc3_bottleneck): Linear(in_features=500, out_features=100, bias=True)
  (Fc4): Linear(in_features=100, out_features=500, bias=True)
  (Fc5_bottleneck): Linear(in_features=500, out_features=100, bias=True)
  (Fc6): Linear(in_features=100, out_features=500, bias=True)
  (Fc7_bottleneck): Linear(in_features=500, out_features=100, bias=True)
  (Fc8): Linear(in_features=100, out_features=500, bias=True)
  (Fc9_bottleneck): Linear(in_features=500, out_features=100, bias=True)
  (Fc10): Linear(in_features=100, out_features=500, bias=True)
  (Fc11_bottleneck): Linear(in_features=500, out_features=100, bias=True)
  (Fc12): Linear(in_features=100, out_features=7, bias=True)
)
```
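For reference, the printout above corresponds to a module definition along these lines (a sketch reconstructed from the printed layer sizes; only `__init__` is shown, since the forward pass is discussed next):

```python
import torch.nn as nn

class CombustionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Layer names and sizes taken from the printed repr above
        self.Fc1 = nn.Linear(2, 500)
        self.Fc2 = nn.Linear(500, 500)
        self.Fc3_bottleneck = nn.Linear(500, 100)
        self.Fc4 = nn.Linear(100, 500)
        self.Fc5_bottleneck = nn.Linear(500, 100)
        self.Fc6 = nn.Linear(100, 500)
        self.Fc7_bottleneck = nn.Linear(500, 100)
        self.Fc8 = nn.Linear(100, 500)
        self.Fc9_bottleneck = nn.Linear(500, 100)
        self.Fc10 = nn.Linear(100, 500)
        self.Fc11_bottleneck = nn.Linear(500, 100)
        self.Fc12 = nn.Linear(100, 7)

model = CombustionModel()
print(model)
```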

For computing the output of one residual block, this is the code I am using:

```python
x = self.Fc1(x)
x = F.relu(x)

'''First ResNet Block'''
res_calc = self.Fc2(x)
res_calc = F.relu(res_calc)
res_calc = self.Fc3_bottleneck(res_calc)
```

Now the line `x = F.relu(torch.add(x, res_calc))` fails with a dimension-mismatch error.
Typically x has size torch.Size([128, 500]) during the layer computations, while the bottleneck's size (torch.Size([128, <size>])) varies; 128 is the batch size.
I want to downsample the x tensor to the same size as res_calc in order to add them, but the add always reports the dimension clash.
I tried torch.nn.functional.interpolate, but it only works for 3-D and higher inputs.
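For reference, the clash can be reproduced in isolation (shapes as quoted above):

```python
import torch

x = torch.rand(128, 500)         # activation after Fc1/ReLU
res_calc = torch.rand(128, 100)  # bottleneck output

try:
    torch.add(x, res_calc)       # (128, 500) and (128, 100) cannot broadcast
except RuntimeError as e:
    print("size mismatch:", e)
```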

Regards.

If I understood it right, Fc1 would have to be as below to take a 128x128x500 input:

```
(Fc1): Linear(in_features=500, out_features=500, bias=True)
```

Just wondering why it didn't throw an error there?

Making the network as below:

```
CombustionModel(
  (Fc1): Linear(in_features=500, out_features=500, bias=True)
  (Fc2): Linear(in_features=500, out_features=500, bias=True)
  (Fc3_bottleneck): Linear(in_features=500, out_features=100, bias=True)
  ...
)
```

Thus, forward() will look like

```python
# input 128x128x500 (batch x 128 x 500)
x = self.Fc1(x)  # 128x128x500, in_features=500, out_features=500
x = F.relu(x)    # 128x128x500

'''First ResNet Block'''
res_calc = self.Fc2(x)                    # 128x128x500
res_calc = F.relu(res_calc)               # 128x128x500
res_calc = self.Fc3_bottleneck(res_calc)  # 128x128x100, in_features=500, out_features=100
```

Of course, the sizes don't match.

No, the input size is 128 x 2; 128 is the batch size and the input feature vector has size 2.
It is not a computer vision problem, it is a CFD-related regression problem. I am learning a transport-equation mapping via a neural network that outputs a 7-dimensional prediction vector.

I am trying to down-sample the 128 x 500 tensor to 128 x <bottleneck_layer_size>, where bottleneck_layer_size is always less than 500. Is there any method, like interpolation, which down-samples this vector?

> Typically the size of x in the computations of layers is torch.Size([128, 500])

This line is confusing: are you referring to the input x, or to the tensor x just before torch.add?

Nevertheless, your implementation will throw a size-mismatch error:

```python
# input 128x128x2 (batch x 128 x 2)
x = self.Fc1(x)  # output size 128x128x500, in_features=2, out_features=500
x = F.relu(x)    # output size 128x128x500

'''First ResNet Block'''
res_calc = self.Fc2(x)                    # 128x128x500
res_calc = F.relu(res_calc)               # 128x128x500
res_calc = self.Fc3_bottleneck(res_calc)  # 128x128x100, in_features=500, out_features=100
```

You can't add two tensors with different sizes.

A possible solution could be:

```
CombustionModel(
  (Fc1): Linear(in_features=2, out_features=500, bias=True)
  (Fc2): Linear(in_features=500, out_features=500, bias=True)
  (Fc3): Linear(in_features=500, out_features=500, bias=True)
  (Fc4_bottleneck): Linear(in_features=500, out_features=100, bias=True)
  ...
)
```

forward():

```python
# input 128x128x2 (batch x 128 x 2)
x = self.Fc1(x)                      # output size 128x128x500
x = F.relu(x)                        # output size 128x128x500
fc2_out = self.Fc2(x)                # 128x128x500
fc2_out = F.relu(fc2_out)            # 128x128x500
fc3_out = self.Fc3(fc2_out)          # 128x128x500
fc3_out = F.relu(fc3_out)            # 128x128x500
res = torch.add(x, fc3_out)          # 128x128x500, sizes match
Fc4b_out = self.Fc4_bottleneck(res)  # 128x128x100
```
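Wired into a self-contained block, the proposed fix could look like this (a sketch; the ReLU placement on the residual sum and the 2-D batch shape are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One block of the proposed fix: two 500->500 layers, a residual
    add at matching width, then the 500->100 bottleneck."""
    def __init__(self):
        super().__init__()
        self.Fc2 = nn.Linear(500, 500)
        self.Fc3 = nn.Linear(500, 500)
        self.Fc4_bottleneck = nn.Linear(500, 100)

    def forward(self, x):
        fc2_out = F.relu(self.Fc2(x))
        fc3_out = F.relu(self.Fc3(fc2_out))
        res = torch.add(x, fc3_out)  # both 500-wide, so the add is valid
        return self.Fc4_bottleneck(F.relu(res))

block = ResidualBlock()
out = block(torch.rand(128, 500))
print(out.shape)  # torch.Size([128, 100])
```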

> Typically the size of x in the computations of layers is torch.Size([128, 500])

Sorry for the confusion; here are the details:

```
CombustionModel(
  (Fc1): Linear(in_features=2, out_features=500, bias=True)
  (Fc2): Linear(in_features=500, out_features=500, bias=True)
  (Fc3_bottleneck): Linear(in_features=500, out_features=100, bias=True)
  ...
)
```

Following are the sizes at every step:

```python
print(x.size())  # torch.Size([128, 2]), x is the input of the network (in_features=2)
x = self.Fc1(x)  # 128x500
x = F.relu(x)    # 128x500

'''First ResNet Block'''
res_calc = self.Fc2(x)                    # 128x500
res_calc = F.relu(res_calc)               # 128x500
res_calc = self.Fc3_bottleneck(res_calc)  # 128x100
x = F.relu(torch.add(x, res_calc))        # fails: x is 128x500 whereas res_calc is 128x100
```

I know I can't add two tensors of different sizes, but there must be a way to downsample.
I want to downsample x to 128x100.
Is there any solution in PyTorch to downsample this, or any other way outside PyTorch?
Your suggested solution adds one extra layer's worth of computation before every bottleneck layer, which is not very efficient.

```python
>>> input = torch.rand(128, 1, 128, 500)
>>> output = torch.nn.functional.interpolate(input, size=[128, 100], mode='nearest', align_corners=None)
>>> output.size()
torch.Size([128, 1, 128, 100])
```

As I stated in my opening post, torch.nn.functional.interpolate() only works for 3-D data and higher, while I am dealing purely with 2-D data, so the suggested solution is not applicable to my case. Please read my last answer, where I clearly stated the dimensions of the batches at every step.

input = torch.rand(128, 1, 128, 500) is just a higher-dimensional representation of the same 2-D data: you add a new singleton dimension to adapt to the function, and it doesn't affect your data processing.

3D to 4D tensor:

```python
input = torch.rand(128, 128, 500)  # 128x128x500
input = input.unsqueeze(1)         # 128x1x128x500
```

2D to 3D tensor:

```python
input = torch.rand(128, 500)  # 128x500
input = input.unsqueeze(0)    # 1x128x500
```
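Putting it together for the 2-D case in this thread: unsqueeze the 128x500 activation to a 3-D tensor, interpolate along the last dimension, squeeze back, then add (a sketch using the shapes quoted above):

```python
import torch
import torch.nn.functional as F

x = torch.rand(128, 500)         # activation to be downsampled
res_calc = torch.rand(128, 100)  # bottleneck output

# interpolate needs at least a 3-D input, so add a leading singleton dim,
# resample the last dimension from 500 to 100, then drop the extra dim again
x_down = F.interpolate(x.unsqueeze(0), size=100, mode='nearest')  # 1x128x100
x_down = x_down.squeeze(0)                                        # 128x100

out = F.relu(torch.add(x_down, res_calc))  # sizes now match
print(out.shape)  # torch.Size([128, 100])
```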