Dimension question about ToTensor()

Hello, I have a problem. I load images in grayscale and then convert them to tensors with `ToTensor()`.
In train mode the result has 3 channels, like `[b, 3, h, w]`,
but in test mode the result has 1 channel, like `[b, 1, h, w]`.
The transform code is exactly the same in both cases, just `ToTensor()`.

How can I fix this?


Can you show us a bit of your code? This seems strange.
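
In the meantime: `ToTensor()` does not choose the channel count itself; it follows the mode of the image you feed it (a PIL `'L'` image yields 1 channel, `'RGB'` yields 3). So the difference almost certainly comes from the loading code, e.g. a `.convert('RGB')` in the train path, or `cv2.imread` (which returns 3 channels by default) in one path and PIL in the other. A minimal NumPy sketch of that channel behavior, where `to_tensor_like` is a hypothetical stand-in for illustration, not torchvision's actual implementation:

```python
import numpy as np

def to_tensor_like(img: np.ndarray) -> np.ndarray:
    """Sketch of ToTensor()'s channel handling: an HxW grayscale
    array becomes 1xHxW, and an HxWxC array becomes CxHxW."""
    if img.ndim == 2:                 # grayscale: add a channel axis
        img = img[:, :, None]
    return img.transpose(2, 0, 1)     # HWC -> CHW

# stand-ins for a grayscale load vs. an RGB load
gray = np.zeros((32, 32), dtype=np.uint8)     # e.g. an image in mode 'L'
rgb = np.zeros((32, 32, 3), dtype=np.uint8)   # e.g. an image in mode 'RGB'

print(to_tensor_like(gray).shape)  # (1, 32, 32)
print(to_tensor_like(rgb).shape)   # (3, 32, 32)
```

If both paths really loaded grayscale images, both would come out as `[b, 1, h, w]`, so it's worth printing the image mode (or array shape) right before the transform in each pipeline.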