If each feature I want my neural network to analyse is a 2D array, would it be possible to pass it to a linear layer, or would it require flattening?

I'm fairly new to neural networks, so any help would be appreciated.

Thanks

Hi,

Yes, it would require flattening if you want the layer to consider all the elements of the array; otherwise a Linear layer is applied to the last dimension only. Here is a small example printing the size of the output when you give a 2D input to a Linear layer without flattening.

```
import torch
import torch.nn as nn

# 2D input: the Linear layer acts on the last dimension only
linear_layer_2d = nn.Linear(in_features=64, out_features=32)
# 1st dimension (128) = batch dimension, input 64 x 64
input_2d = torch.randn(128, 64, 64)
output_2d = linear_layer_2d(input_2d)
print(output_2d.size())
# torch.Size([128, 64, 32])

# 1D input (2D flattened): 64 * 64 = 4096 features
linear_layer_1d = nn.Linear(in_features=4096, out_features=32)
# input_1d size = [128, 4096]
input_1d = torch.flatten(input_2d, start_dim=1)
output_1d = linear_layer_1d(input_1d)
print(output_1d.size())
# torch.Size([128, 32])
```
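For completeness, the flattening can also live inside the model itself. Below is a small sketch (the layer sizes simply mirror the example above) using `nn.Flatten`, which flattens everything after the batch dimension:

```python
import torch
import torch.nn as nn

# nn.Flatten(start_dim=1) turns a [128, 64, 64] input into [128, 4096]
# before it reaches the Linear layer.
model = nn.Sequential(
    nn.Flatten(start_dim=1),
    nn.Linear(in_features=64 * 64, out_features=32),
)

output = model(torch.randn(128, 64, 64))
print(output.size())
# torch.Size([128, 32])
```

This way you can feed the 2D features directly to the model without flattening them beforehand.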

Let me know if it’s not clear!


Hi,

I have a similar question. I am trying to train a simple NN which takes a 2D tensor as input data and outputs a 2D tensor. Specifically, I would like the input and output to have shape 16x2, and the outputs of the hidden layers should also be 2D (ignoring the batch size).

If, for example, my batch size is 128, my requirements are as follows:

input shape: 128 x 16 x 2

1st hidden layer output shape: 128 x 256 x 2

2nd hidden layer output shape: 128 x 512 x 2

3rd hidden layer output shape: 128 x 256 x 2

output shape: 128 x 16 x 2

I wonder how to design and implement such a NN. Could you please comment?

Best wishes,
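One possible sketch of such a network, assuming plain Linear layers are acceptable: transpose so the size-16 dimension becomes the last (feature) dimension, apply the Linear layers there, and transpose back at the end. The class name `MidDimMLP` and the layer sizes are just illustrative; note the hidden activations then have shape `[batch, 2, N]`, i.e. the transpose of the `[batch, N, 2]` shapes listed above.

```python
import torch
import torch.nn as nn

class MidDimMLP(nn.Module):
    # Applies each Linear layer along the middle dimension by first
    # transposing it into the last position.
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 256)
        self.fc2 = nn.Linear(256, 512)
        self.fc3 = nn.Linear(512, 256)
        self.out = nn.Linear(256, 16)

    def forward(self, x):                 # x: [batch, 16, 2]
        x = x.transpose(1, 2)             # [batch, 2, 16]
        x = torch.relu(self.fc1(x))       # [batch, 2, 256]
        x = torch.relu(self.fc2(x))       # [batch, 2, 512]
        x = torch.relu(self.fc3(x))       # [batch, 2, 256]
        x = self.out(x)                   # [batch, 2, 16]
        return x.transpose(1, 2)          # [batch, 16, 2]

model = MidDimMLP()
print(model(torch.randn(128, 16, 2)).shape)
# torch.Size([128, 16, 2])
```

Note that this treats the two columns independently (the same weights are applied to each of the 2 slices); if the two columns should interact, you would instead flatten to `[batch, 32]` as in the earlier reply and reshape at the output.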