Hey everyone,

I am currently working with convolutional recurrent units (ConvLSTM & ConvGRU).

Both expect an input tensor of shape

`[batch_size, timestep, num_channels, height, width]`.

For further processing I need the tensor to be of shape

`[batch_size, num_channels, height, width]`.

In my scenario I get the timesteps by saving the two previous results, `[t-2, t-1, t]`, and stacking them along dimension 1. The output of the recurrent unit has the same shape.
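For concreteness, the stacking step above can be sketched like this (sizes are illustrative; the frame tensors are hypothetical stand-ins for the saved results):

```python
import torch

# Three consecutive results, each [batch, channels, height, width].
# Sizes here are made up for illustration.
f_tm2 = torch.randn(8, 2, 16, 32)  # result at t-2
f_tm1 = torch.randn(8, 2, 16, 32)  # result at t-1
f_t   = torch.randn(8, 2, 16, 32)  # result at t

# torch.stack inserts a new axis; dim=1 creates the timestep dimension.
seq = torch.stack([f_tm2, f_tm1, f_t], dim=1)
print(seq.shape)  # torch.Size([8, 3, 2, 16, 32])
```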

What is the best way to get from

`[_, 3, _, _, _]` to `[_, 1, _, _, _]`

so that I can then call `mytensor.squeeze(1)`?

I was thinking about doing:

```
my_output_tensor.shape  # .shape is an attribute, not a method
# [8, 3, 2, 217, 512]
my_output_tensor = my_output_tensor[:, -1, :, :, :]  # keep only timestep t
# [8, 2, 217, 512]
```

which gives me the output tensor for `[t]` but discards the outputs for `[t-2, t-1]`.

Alternatively, I was thinking about applying a 3D convolution with a 1×1 spatial kernel along the timestep dimension to reduce the number of timesteps from 3 to 1.
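One way to sketch that idea: `nn.Conv3d` expects `[batch, channels, depth, height, width]`, so you can permute the timestep axis into the depth slot and use a `(3, 1, 1)` kernel, which collapses the 3 timesteps into 1 learned combination per channel without mixing spatial positions. (Sizes below are illustrative, not your actual 217×512.)

```python
import torch
import torch.nn as nn

B, T, C, H, W = 8, 3, 2, 16, 32
x = torch.randn(B, T, C, H, W)          # [batch, timestep, channels, H, W]

# Conv3d wants [B, C, depth, H, W]; move timesteps into the depth slot.
x = x.permute(0, 2, 1, 3, 4)            # [B, C, T, H, W]

# Kernel (T, 1, 1) with no padding: depth shrinks from T to 1,
# spatial dimensions are untouched.
reduce_time = nn.Conv3d(in_channels=C, out_channels=C, kernel_size=(T, 1, 1))
y = reduce_time(x)                      # [B, C, 1, H, W]
y = y.squeeze(2)                        # [B, C, H, W]
print(y.shape)                          # torch.Size([8, 2, 16, 32])
```

This keeps a learnable, data-driven mix of all three timesteps rather than throwing two of them away.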

I was wondering whether there are any meaningful ways to reduce this dimension without losing too much information.
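Two simple parameter-light baselines worth benchmarking against the conv idea: a plain mean over the timestep dimension, and a learned softmax-weighted sum (one scalar weight per timestep). Both are sketches under the shapes from the post, shrunk spatially:

```python
import torch

x = torch.randn(8, 3, 2, 16, 32)            # [B, T, C, H, W]

# Baseline 1: average the three timestep outputs (parameter-free).
avg = x.mean(dim=1)                          # [8, 2, 16, 32]

# Baseline 2: learned convex combination of timesteps.
# softmax keeps the three weights positive and summing to 1.
w = torch.nn.Parameter(torch.zeros(3))
weights = torch.softmax(w, dim=0).view(1, 3, 1, 1, 1)
weighted = (x * weights).sum(dim=1)          # [8, 2, 16, 32]
```

With `w` initialized to zeros, the weighted sum starts out identical to the mean and can then learn to favor `t` over `t-2` if that helps.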

Thanks in advance!

Cheers,

Sven