Using TensorDataset instead of ImageFolder in DCGAN example

Instead of working with image files such as .png and .jpg, I’ve extracted three variables u, v, w from a NetCDF file, each with dimensions (hours, number of grid points in X, number of grid points in Y). The u, v, w components can be seen as three separate channels, just like the RGB channels of an image. I then convert each component to a tensor with torch.from_numpy, reshape it to (hours, channel, gridX, gridY), and concatenate the three along the channel dimension to get my data, so the resulting tensor has shape (745, 3, 128, 128). I also have low-resolution data of shape (745, 3, 64, 64).
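For reference, my loading step looks roughly like this (a minimal sketch; the file name and variable names are placeholders for my actual ones):

import numpy as np
import torch
from netCDF4 import Dataset

nc_file = Dataset("wind_field.nc")  # hypothetical file name

# Each variable comes out as (hours, gridX, gridY) = (745, 128, 128)
u = np.asarray(nc_file.variables["u"][:], dtype=np.float32)
v = np.asarray(nc_file.variables["v"][:], dtype=np.float32)
w = np.asarray(nc_file.variables["w"][:], dtype=np.float32)

# Add a channel dimension so each tensor is (745, 1, 128, 128);
# this is what makes the dim=1 concatenation below stack them into 3 channels
u_tensor = torch.from_numpy(u).unsqueeze(1)
v_tensor = torch.from_numpy(v).unsqueeze(1)
w_tensor = torch.from_numpy(w).unsqueeze(1)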

Concatenating the tensors together like three RGB channels

# each *_tensor is (745, 1, H, W), so dim=1 stacks the three channels
HR_data = torch.cat((u_tensor, v_tensor, w_tensor), dim=1)           # shape (745, 3, 128, 128)
LR_data = torch.cat((u_tensor_lr, v_tensor_lr, w_tensor_lr), dim=1)  # shape (745, 3, 64, 64)

Normalizing

# standardize to zero mean and unit std, computed over the whole tensor
HR_data_norm = (HR_data - HR_data.mean()) / HR_data.std()
LR_data_norm = (LR_data - LR_data.mean()) / LR_data.std()
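One thing I’m unsure about: the tutorial’s generator ends in a Tanh, so its training images are scaled to [-1, 1], whereas my normalization gives zero mean and unit variance. If that mismatch matters, a per-channel min-max scaling to [-1, 1] might be closer to the tutorial’s setup (a sketch, not what I currently use):

def minmax_scale(x):
    # x is (N, C, H, W); take min/max per channel over all samples
    x_min = x.amin(dim=(0, 2, 3), keepdim=True)
    x_max = x.amax(dim=(0, 2, 3), keepdim=True)
    # map each channel to [-1, 1] to match the Tanh output range
    return 2 * (x - x_min) / (x_max - x_min) - 1

HR_data_norm = minmax_scale(HR_data)
LR_data_norm = minmax_scale(LR_data)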

Creating training set

dataset_train = torch.utils.data.TensorDataset(LR_data_norm, HR_data_norm)
trainloader = torch.utils.data.DataLoader(dataset_train, batch_size=batchSize,
                                          shuffle=True, num_workers=0)
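A quick sanity check that the loader yields what I expect:

# each batch is a (low-res, high-res) pair with a shared batch dimension
lr_batch, hr_batch = next(iter(trainloader))
print(lr_batch.shape)  # torch.Size([64, 3, 64, 64])
print(hr_batch.shape)  # torch.Size([64, 3, 128, 128])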

I’m trying to fit my tensor data into a DCGAN, using the tutorial at https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html as a basis, and I’m treating my data just like images since they are in tensor form. I have set some hyperparameters such as:
nz = HR_data.shape[0]  # size of generator input
ngf = 64               # size of feature maps in generator
ndf = 128              # size of feature maps in discriminator
batchSize = 64
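For comparison, the tutorial itself uses the following values, so my nz and ndf deviate from it (I’m not sure whether that is part of the problem):

# reference values from the DCGAN tutorial
image_size = 64  # the tutorial's networks are built for 64x64 inputs
nz = 100         # size of the latent z vector (generator input)
ngf = 64         # size of feature maps in generator
ndf = 64         # size of feature maps in discriminator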

However, when I try to train the discriminator on the real images, i.e. the high-resolution data, I get this error:
“ValueError: Target and input must have the same number of elements. target nelement (64) != input nelement (1600)”
which means that the binary cross entropy function needs the input and target to have the same number of elements. My target has 64 elements (the batch size, I guess), while the discriminator output has 1600.
My question is therefore: how do I change the training loop and the DCGAN example code to work with my data? I think there must be something wrong in my discriminator.
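To show where the 1600 seems to come from, here is a minimal shape trace of the tutorial’s discriminator (written as a plain Sequential with the tutorial’s defaults nc = 3, ndf = 64) fed with a 128x128 batch; since the network is built for 64x64 inputs, the final 4x4 convolution only reduces 8x8 to 5x5, and 64 * 5 * 5 = 1600:

import torch
import torch.nn as nn

nc, ndf = 3, 64  # tutorial defaults

netD = nn.Sequential(
    nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),           # 128 -> 64
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),      # 64 -> 32
    nn.BatchNorm2d(ndf * 2),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),  # 32 -> 16
    nn.BatchNorm2d(ndf * 4),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),  # 16 -> 8
    nn.BatchNorm2d(ndf * 8),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),        # 8 -> 5, not 1
    nn.Sigmoid(),
)

out = netD(torch.randn(64, nc, 128, 128))
print(out.shape)             # torch.Size([64, 1, 5, 5])
print(out.view(-1).numel())  # 1600 = 64 * 5 * 5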

tl;dr: I want to train a DCGAN with low- and high-resolution data. How do I change DCGAN_example to use a TensorDataset instead of ImageFolder?