Validation set problem

Hi everyone, I designed my NN but I got an error about different sizes. My training set size is [77, 768] but my validation set size is [77, 1, 3]. How can I fix this problem?
My loops are:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Module(nn.Module):
    def __init__(self, D_in, H1, H2, D_out):
        super().__init__()  # required before registering submodules
        self.linear1 = nn.Linear(D_in, H1)
        self.linear2 = nn.Linear(H1, H2)
        self.linear3 = nn.Linear(H2, D_out)

    def forward(self, x):
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        x = self.linear3(x)
        return x
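For what it's worth, a quick shape check (with hypothetical hidden/output sizes and random data) shows that D_in must equal the flattened feature count, which is 768 here (1*16*16*3):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Module(nn.Module):
    def __init__(self, D_in, H1, H2, D_out):
        super().__init__()
        self.linear1 = nn.Linear(D_in, H1)
        self.linear2 = nn.Linear(H1, H2)
        self.linear3 = nn.Linear(H2, D_out)

    def forward(self, x):
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        return self.linear3(x)

# hypothetical sizes: 768 input features (1*16*16*3 flattened), 2 output classes
model = Module(768, 128, 64, 2)
x = torch.randn(77, 1, 16, 16, 3)      # a batch shaped like the train set
out = model(x.view(x.shape[0], -1))    # flatten to [77, 768] before the first Linear
print(out.shape)                       # torch.Size([77, 2])
```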
for e in range(epochs):
    running_loss = 0.0
    running_corrects = 0.0
    val_running_loss = 0.0
    val_running_corrects = 0.0
    for inputs, out in train_generator:
        inputs = inputs.view(inputs.shape[0], -1)  # flatten to [batch, features]
        output = model(inputs)
        loss = criterion(output, out)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    with torch.no_grad():
        for val_inputs, val_labels in valid_generator:
            val_inputs = val_inputs.view(val_inputs.shape[0], -1)
            val_outputs = model(val_inputs)
            val_loss = criterion(val_outputs, val_labels)
            _, val_preds = torch.max(val_outputs, 1)
            val_running_loss += val_loss.item()
            val_running_corrects += torch.sum(val_preds == val_labels)


Usually a validation set is a subset of the whole dataset, which means the structure of both the training and validation sets should be consistent, as you know. Could you please show a sample of your training set and validation set so we can track down the issue? As you commented in your validation code, we can play with view, etc., but first you need to make sure the data is transformed reliably.

First I read the data from a .txt file and store the datasets in a 'DATA' array. Then my manipulations are:

val_range = int(data_x.shape[0] / 100) * 15  # first 15% for validation
val_x = data_x[:val_range, :, :]
train_x = data_x[val_range:, :, :]

val_y = data_y[:val_range, :]
train_y = data_y[val_range:, :]
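As a sanity check (with a made-up random tensor of the same shape), slicing along dim 0 like this leaves every other dimension untouched, so train and val should come out with the same trailing shape:

```python
import torch

data_x = torch.randn(19172, 1, 3)            # same shape as the original train_x
val_range = int(data_x.shape[0] / 100) * 15  # 15% -> 2865 rows
val_x = data_x[:val_range, :, :]
train_x = data_x[val_range:, :, :]
print(val_x.shape, train_x.shape)
# torch.Size([2865, 1, 3]) torch.Size([16307, 1, 3])
```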

If you ask about the original sizes of the data:

train_x = [19172, 1, 3] and train_y = [19172, 1]

I cannot understand why the

training set size is [77,768] but validation set size is [77,1,3]

because the following code gives the train and validation sets the same shape:

val_range = int(data_x.shape[0] / 100) * 15
val_x = data_x[:val_range, :, :]
train_x = data_x[val_range:, :, :]

Hi, they cannot be the same, because when I print the sizes, the output is:

train x size: torch.Size([77, 1, 16, 16, 3])
train y size:  torch.Size([77, 1, 16, 16])
val x size:  torch.Size([3465, 1, 3])
val y size:  torch.Size([3465, 1])
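These shapes actually explain the original error: 1*16*16*3 = 768, so flattening the train batch with view gives [77, 768], while flattening the val batch gives [3465, 3]. A quick demo with random tensors of the shapes posted above:

```python
import torch

train_x = torch.randn(77, 1, 16, 16, 3)  # shapes from the post above (random data)
val_x = torch.randn(3465, 1, 3)

print(train_x.view(train_x.shape[0], -1).shape)  # torch.Size([77, 768])  (1*16*16*3)
print(val_x.view(val_x.shape[0], -1).shape)      # torch.Size([3465, 3])
```

So no amount of flattening can make these consistent; the train and val tensors were built from differently shaped sources.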

Generally, we run the entire (partly) trained model on the validation set.
If the input shapes of the train and validation sets differ, you can change them to be consistent.
That is my suggestion; I hope others can offer a better solution to this problem.

Based on the code above, you are changing the sizes; before that, the val and train sets are consistent. If you want to extract the validation dataset, first do the size changes, then extract the validation split.

First, separate your x and y, then do the dimension changes you want, and only then extract the validation split with the slicing you have already written. That way the sizes should be consistent.

Note that we always extract the train, test, and validation sets from the same source, so they all have to have the same structure.
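To illustrate that advice with a made-up tensor shaped like the train data: reshape the whole dataset first, then take the validation slice, so both splits inherit the same feature dimension:

```python
import torch

# hypothetical raw data with the train-set structure from the thread
data_x = torch.randn(500, 1, 16, 16, 3)
data_y = torch.randint(0, 2, (500, 1))

# 1) do the dimension changes on the whole dataset first
data_x = data_x.view(data_x.shape[0], -1)   # [500, 768]

# 2) then extract the validation split
val_range = int(data_x.shape[0] / 100) * 15
val_x, train_x = data_x[:val_range], data_x[val_range:]
val_y, train_y = data_y[:val_range], data_y[val_range:]

print(train_x.shape, val_x.shape)  # torch.Size([425, 768]) torch.Size([75, 768])
```

Both splits now have 768 features, so the same model input size works for training and validation.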