Why does the 1D convolution have a 3rd channel?

I was looking at the 1D convolution (http://pytorch.org/docs/master/nn.html#conv1d) and the input is of size (N, C_in, L_in). I am wondering: why is there a third dimension? For a 1D convolution I would have expected that we convolve along a 1D line, so each "line" (a vector of size (1, C_in)) would be one sample, giving a data set of size (N, C_in). If that is the case, why do we need a third dimension? What is its meaning, and how does it affect the output of the convolution?

For 2D images I guess the 3rd channel is RGB, so it's usually 3, which makes sense there, but I don't get it for 1D or how it affects the output of the convolution.

C_in is actually the channel dimension; the convolution slides along the L_in dimension, and at each position it sums contributions from all C_in channels. If you don't need more than 1 channel, just set it to 1. Multiple channels can be useful in cases like neural machine translation with embedding input, where L_in is the sentence length and C_in is the embedding dimension.
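To make the role of C_in concrete, here is a minimal pure-Python sketch (not the PyTorch implementation) of what a 1D convolution computes for a single sample and a single output channel, with stride 1 and no padding. The function name `conv1d_single` and the toy numbers are made up for illustration; the point is that the kernel has one row per input channel and the per-channel products are summed at each output position.

```python
def conv1d_single(x, weight, bias=0.0):
    """x: list of C_in channels, each a list of length L_in.
    weight: list of C_in kernels, each a list of length k.
    Returns a list of length L_in - k + 1 (stride 1, no padding)."""
    c_in = len(x)
    k = len(weight[0])
    l_out = len(x[0]) - k + 1
    out = []
    for pos in range(l_out):          # slide along the L_in dimension
        acc = bias
        for c in range(c_in):         # sum over input channels
            for j in range(k):
                acc += x[c][pos + j] * weight[c][j]
        out.append(acc)
    return out

# Two input channels (C_in = 2), length 4 (L_in = 4), kernel size 2:
x = [[1, 2, 3, 4],
     [10, 20, 30, 40]]
w = [[1, 1],   # kernel row applied to channel 0
     [1, 1]]   # kernel row applied to channel 1
print(conv1d_single(x, w))  # [33.0, 55.0, 77.0]
```

With C_in = 1 this collapses to the plain "convolve a single line" picture from the question; the extra dimension only matters when each position carries more than one value (e.g. an embedding vector per token).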