Convert non-image data into image-like data to use a CNN

I have 3 kinds of spatial data to use as inputs, and I want to treat them as image data with 3 channels. Each channel is reshaped to 35x5, and there are 5000 samples.

import numpy as np

in_x = []
for i in range(5000):
    # stack the three inputs as channels; z is tiled with 'wrap' padding to fill 35x5
    in_x.append([x[i].reshape(35, 5),
                 delta[i].reshape(35, 5),
                 np.pad(z[i], (0, 170), 'wrap').reshape(35, 5)])
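If x and delta are arrays of shape (5000, 175) and z has shape (5000, 5) (an assumption inferred from the reshapes and padding above), the same stacking can be done without a Python loop using np.stack. A minimal sketch with random stand-ins for the loaded files:

```python
import numpy as np

# Hypothetical stand-ins for the data loaded from the .out files:
# x, delta: (5000, 175); z: (5000, 5)
x = np.random.randn(5000, 175)
delta = np.random.randn(5000, 175)
z = np.random.randn(5000, 5)

channels = [
    x.reshape(-1, 35, 5),      # channel 0
    delta.reshape(-1, 35, 5),  # channel 1
    # channel 2: tile each length-5 row cyclically to 175 values, then reshape
    np.pad(z, ((0, 0), (0, 170)), 'wrap').reshape(-1, 35, 5),
]
in_x = np.stack(channels, axis=1)  # shape (5000, 3, 35, 5), channels-first for PyTorch
```

This produces one channels-first array per sample, which is the layout PyTorch's Conv2d expects.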

The three types of data are x, delta and z. They were loaded separately, each with 5000 samples.

delta=np.loadtxt('./input_delta4.out')
x=np.loadtxt('./input_x4.out')
z=np.loadtxt('./input_z4.out')

But I found that my conversion method is wrong.

dataiter = iter(dataloaders['train'])
images, labels = next(dataiter)  # dataiter.next() is not available in recent PyTorch versions
len(images)

This outputs a length of 5000. But it should be 3, right?
How can I fix it?

I don’t know exactly how you are creating the Dataset or DataLoader, but this small example shows that the batch size is used as expected:

import torch

dataset = torch.utils.data.TensorDataset(torch.randn(5000, 35, 5),
                                         torch.randn(5000, 35, 5),
                                         torch.randn(5000, 35, 5))
loader = torch.utils.data.DataLoader(dataset, batch_size=20)

a, b, c = next(iter(loader))

print(a.shape)
# torch.Size([20, 35, 5])
print(b.shape)
# torch.Size([20, 35, 5])
print(c.shape)
# torch.Size([20, 35, 5])
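For a CNN you would usually want the three arrays combined into a single channels-first tensor rather than kept as three separate dataset fields, so that each batch already has shape (batch, channels, H, W). A sketch of that variant, with random data and hypothetical labels standing in for the real inputs:

```python
import torch

# Stand-ins for the three reshaped inputs, each (5000, 35, 5)
x = torch.randn(5000, 35, 5)
delta = torch.randn(5000, 35, 5)
z = torch.randn(5000, 35, 5)

inputs = torch.stack([x, delta, z], dim=1)  # (5000, 3, 35, 5): N, C, H, W
labels = torch.randint(0, 2, (5000,))       # hypothetical binary targets

dataset = torch.utils.data.TensorDataset(inputs, labels)
loader = torch.utils.data.DataLoader(dataset, batch_size=20)

images, targets = next(iter(loader))
print(images.shape)  # torch.Size([20, 3, 35, 5])
```

Each batch is then directly usable by a layer like nn.Conv2d(3, ...), and len(images) returns the batch size (20 here), not the number of channels.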

Thanks!!!

I have figured it out