When using torch.nn.functional.conv3d, are the filters repeated along the second dimension of the weight tensor?

I am currently trying to use the torch.nn.functional.conv3d function, as I am trying to implement my own filters. The function expects the filters to be of shape (out_channels, in_channels, kT, kH, kW).

Say I have a network whose first convolutional layer has 1 input channel and 8 filters of size 5×5×5; the weight would then be a tensor of shape [8, 1, 5, 5, 5]. If the second layer has 16 filters, its weight is [16, 8, 5, 5, 5]. But from what I understand, wouldn't the filters along the second dimension all be the same?

That is to say, if you were to iterate over the second layer of filters (SecondLayerFilters):

for filters in SecondLayerFilters[0]:
    print(filters)

shouldn't all the printed filters be the same? Am I misunderstanding something?

EDIT: I realise I didn't make my problem very clear. I understand that out_channels is basically the number of filters in that layer, but I don't understand the need for in_channels. My understanding is that the process will use the same filter for each in_channel, so are these filters just duplicates, or are they all different?

Each filter uses different weights for each input channel, i.e. a weight of shape [16, 8, 5, 5, 5] contains 16 filters, each with its own [8, 5, 5, 5] weight tensor (with different values). For each window, that weight tensor is multiplied elementwise with the input patch of the same shape, and the products are summed over all input channels to produce a single output value.
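You can check this directly on a layer with the sizes from your example (a minimal sketch; `nn.Conv3d` stores its weight exactly in the (out_channels, in_channels, kT, kH, kW) layout):

```python
import torch
import torch.nn as nn

# Layer sizes from the question: 8 input channels -> 16 filters, 5x5x5 kernels.
conv = nn.Conv3d(in_channels=8, out_channels=16, kernel_size=5)
print(conv.weight.shape)  # torch.Size([16, 8, 5, 5, 5])

# The per-input-channel slices of a single filter are independently
# initialised, so they are not duplicates of one another.
first_filter = conv.weight[0]  # shape [8, 5, 5, 5]
print(torch.equal(first_filter[0], first_filter[1]))  # False
```

So iterating over `SecondLayerFilters[0]` prints eight different [5, 5, 5] tensors, one per input channel, not eight copies of the same one.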

CS231n - Convolution explains this quite well for nn.Conv2d.
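As a quick sanity check of the multiply-and-sum description (a minimal sketch with random tensors; the input is exactly one window's worth, so the output has a single spatial position), you can compare F.conv3d against the manual computation:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 5, 5, 5)    # one window of input: 8 channels, 5x5x5
w = torch.randn(16, 8, 5, 5, 5)   # 16 filters, each [8, 5, 5, 5]

out = F.conv3d(x, w)              # output shape [1, 16, 1, 1, 1]

# Manual value for filter 0: elementwise product with all 8 input channels,
# then a single sum over every element.
manual = (x[0] * w[0]).sum()
print(torch.allclose(out[0, 0, 0, 0, 0], manual, rtol=1e-4, atol=1e-5))
```

Each of the 16 output channels is produced the same way by its own [8, 5, 5, 5] filter, which is why the weights along the in_channels dimension need to be (and are) different.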