Thank you!
Transposing my data gave me the error below (which is clearer than what I got before), but your link helped!
Adding mode='RGB' to the ToPILImage call resolved the issue.
(This is what I get when transposing my data into [C, H, W]:)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "<ipython-input-169-4b1060b9c7d1>", line 61, in __getitem__
data = self.transform(self.X[idx])
File "/usr/local/lib/python3.9/dist-packages/torchvision/transforms/transforms.py", line 95, in __call__
img = t(img)
File "/usr/local/lib/python3.9/dist-packages/torchvision/transforms/transforms.py", line 227, in __call__
return F.to_pil_image(pic, self.mode)
File "/usr/local/lib/python3.9/dist-packages/torchvision/transforms/functional.py", line 283, in to_pil_image
raise ValueError(f"pic should not have > 4 channels. Got {pic.shape[-1]} channels.")
ValueError: pic should not have > 4 channels. Got 512 channels.