Reducing tensor dimensions with network

I have two input tensors: the first has size [16, 2, 310] and the second has size [16, 6].
My label size is [16, 5].
The last dimension of the first tensor can change ([16, 2, x], where x can be anything).
How can I feed both tensors into a network whose output has the same dimensions as the labels?
My data is well test data from petroleum engineering.

How can two inputs have different dimensions? Are they from different datasets?
Your inputs need not have the same dimensions as your output: they can differ, and an nn.LazyLinear layer can handle the dimensionality-reduction part. That said, before reducing or expanding dimensions you must make sure that doing so is plausible for your data.
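As a minimal sketch of the point above: nn.LazyLinear infers its input size on the first forward pass, so you don't have to compute the flattened size by hand. (Note that the input size is fixed after that first call, so this alone does not handle a truly variable-length input; the layer name and shapes here are just illustrative.)

```python
import torch
import torch.nn as nn

# LazyLinear leaves in_features unspecified; it is inferred from the
# first batch that passes through the layer.
layer = nn.LazyLinear(out_features=5)

x = torch.randn(16, 2 * 310)  # a [16, 2, 310] tensor flattened per sample
out = layer(x)
print(out.shape)  # torch.Size([16, 5]) -- matches the label size
```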

My data is well test data from petroleum engineering.
In a well test operation, pressure is recorded as a function of time; that is the first tensor. The well also has a few characteristics, each described by a single number. In other words, the pressure-versus-time dataset is associated with several parameters that each have only one numerical value.

It would be great if you could provide an example.

OK, this is my data.

This is data from my database:
pressure, time, well radius, and … are the inputs;
kh, omega, lambda, and C are the outputs (labels).

Make a 1D CNN, RNN, or Transformer branch for your sequential data (i.e. shape [batch_size, seq_len, features]), and a separate branch of linear layers for your non-sequential scalar data (i.e. shape [batch_size, features]). Then concatenate those outputs and feed them into a final head (i.e. a linear layer whose output size matches your labels).
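A rough sketch of that two-branch idea, assuming the first tensor is laid out as [batch, channels, length] (which [16, 2, 310] already matches for Conv1d); the class name, channel counts, and kernel size are illustrative choices, not prescribed values:

```python
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    """Hypothetical sketch: a 1D-CNN branch for the pressure/time sequence
    and a linear branch for the per-well scalars, merged in a final head."""

    def __init__(self, seq_channels=2, scalar_features=6, out_features=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(seq_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapses any sequence length x to 1
        )
        self.scalar = nn.Sequential(nn.Linear(scalar_features, 16), nn.ReLU())
        self.head = nn.Linear(16 + 16, out_features)

    def forward(self, seq, scalars):
        s = self.conv(seq).squeeze(-1)   # [batch, 16]
        t = self.scalar(scalars)         # [batch, 16]
        return self.head(torch.cat([s, t], dim=1))

model = TwoBranchNet()
out = model(torch.randn(16, 2, 310), torch.randn(16, 6))
print(out.shape)  # torch.Size([16, 5])
```

The AdaptiveAvgPool1d(1) is one simple way to absorb the variable length x; an RNN's final hidden state or attention pooling would serve the same purpose.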

Make sure to also normalize all data to the range [0, 1].

What is meant by normalization?
How can I normalize all the data?

Min-max is the most common:
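A small sketch of per-feature min-max scaling in PyTorch; the helper name and the eps term (added to avoid division by zero for constant features) are my own additions:

```python
import torch

def min_max_normalize(x, dim=0, eps=1e-8):
    # Rescale each feature to [0, 1]: (x - min) / (max - min).
    x_min = x.min(dim=dim, keepdim=True).values
    x_max = x.max(dim=dim, keepdim=True).values
    return (x - x_min) / (x_max - x_min + eps)

# Each column is scaled independently, so features with very different
# ranges (e.g. pressure vs. well radius) end up on the same [0, 1] scale.
data = torch.tensor([[1.0, 100.0],
                     [2.0, 200.0],
                     [3.0, 300.0]])
normalized = min_max_normalize(data)
print(normalized)  # each column maps to roughly [0, 0.5, 1]
```

Remember to compute the min and max on the training set only and reuse those statistics for validation/test data, and to invert the scaling on the labels when reporting predictions.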