If a transposed convolution layer takes in a 3 channel input and gives out a 2 channel output, how many kernels would it have and why?

Isn’t the tconv layer supposed to have 6 kernels, divided into two groups of three — three kernels producing one output channel and another three producing the other, since the tconv layer has to give a two-channel output? Am I right or not?

No, the out_channels value defines the number of kernels, and since it’s set to 2 the layer would use 2 kernels. Each of those kernels has 3 channels (one per input channel), so if you count the 2-D slices individually you do get 6, which is where your intuition comes from — but by convention they are counted as 2 kernels.
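To make the counting concrete, here is a minimal pure-Python sketch. It assumes PyTorch’s ConvTranspose2d weight layout of (in_channels, out_channels, kH, kW) and a hypothetical 3×3 kernel size (the kernel size is my assumption, not from your question):

```python
# Illustrative sketch: kernel counting for a transposed conv layer with
# 3 input channels and 2 output channels (kernel size assumed 3x3).
in_channels, out_channels = 3, 2
kh, kw = 3, 3  # assumed kernel size

# PyTorch's ConvTranspose2d stores its weight with shape
# (in_channels, out_channels, kH, kW).
weight_shape = (in_channels, out_channels, kh, kw)

# Counted the conventional way (what out_channels refers to):
num_kernels = out_channels  # 2 kernels, each with 3 channels

# Counted as individual 2-D slices — the "6 kernels" intuition:
num_2d_slices = in_channels * out_channels  # 3 * 2 = 6

print(weight_shape)   # (3, 2, 3, 3)
print(num_kernels)    # 2
print(num_2d_slices)  # 6
```

So both counts describe the same weight tensor; the layer is simply said to have 2 kernels because each kernel spans all 3 input channels.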
CS231n’s notes on the Convolutional Layer explain the kernel shapes with some nice illustrations, and “A guide to convolution arithmetic for deep learning” is one of the best references when it comes to shape calculations.