ValueError: too many values to unpack (expected 2)

I am building an LSTM model for sentiment analysis in PyTorch. I am using a Twitter dataset for this and used scikit-learn to split the data into training and test sets. This is my training loop:

# training params
batch_size = 25
epochs = 5 # 3-4 is approx where I noticed the validation loss stop decreasing

counter = 0
print_every = 100
clip = 5 # gradient clipping

# move model to GPU, if available
#if(train_on_gpu):
#    net.cuda()
    
net.train()
# train for some number of epochs
for e in range(epochs):
    
    # initialize hidden state
    h = net.init_hidden(batch_size)

    # batch loop
    for inputs, labels in X_train:
        counter += 1

        #if(train_on_gpu):
         #   inputs, labels = inputs.cuda(), labels.cuda()

        inputs, labels = inputs.to(device), labels.to(device)

        # Creating new variables for the hidden state, otherwise
        # we'd backprop through the entire training history
        h = tuple([each.data for each in h])

        # zero accumulated gradients
        net.zero_grad()

        # get the output from the model
        output, h = net(inputs, h)

        # calculate the loss and perform backprop
        loss = criterion(output.squeeze(), labels.float())
        loss.backward()
        # `clip_grad_norm_` helps prevent the exploding gradient problem in RNNs / LSTMs.
        nn.utils.clip_grad_norm_(net.parameters(), clip)
        optimizer.step()

        # loss stats
        if counter % print_every == 0:
            # Get validation loss
            val_h = net.init_hidden(batch_size)
            val_losses = []
            net.eval()
            for inputs, labels in X_test:

                # Creating new variables for the hidden state, otherwise
                # we'd backprop through the entire training history
                val_h = tuple([each.data for each in val_h])

                #if(train_on_gpu):
                #    inputs, labels = inputs.cuda(), labels.cuda()

                inputs, labels = inputs.to(device), labels.to(device)
                
                output, val_h = net(inputs, val_h)
                val_loss = criterion(output.squeeze(), labels.float())

                val_losses.append(val_loss.item())

            net.train()
            print("Epoch: {}/{}...".format(e+1, epochs),
                  "Step: {}...".format(counter),
                  "Loss: {:.6f}...".format(loss.item()),
                  "Val Loss: {:.6f}".format(np.mean(val_losses)))

This is the error I get:

 19     # batch loop
---> 20     for inputs, labels in X_train:
     21         counter += 1
     22 

ValueError: too many values to unpack (expected 2)

Can anyone help me solve this problem? I am at the very last step and clueless about what to do next. This is a link to my GitHub page for this problem; you can check it anytime. Thanks in advance.

Hi,

What is X_train?

Also, you have not actually added your GitHub link.
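
If X_train here is just the raw array returned by train_test_split, then iterating over it yields individual rows rather than (inputs, labels) pairs, which would explain the unpacking error. Below is a minimal sketch of wrapping the split arrays in DataLoaders, assuming X_train, y_train, X_test, y_test are NumPy arrays of encoded tweets and integer labels (the variable names come from your snippet; everything else is an assumption about your data):

import torch
from torch.utils.data import TensorDataset, DataLoader

# wrap the split arrays so each batch yields an (inputs, labels) pair
train_data = TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train))
test_data = TensorDataset(torch.from_numpy(X_test), torch.from_numpy(y_test))

# drop_last avoids a final batch smaller than batch_size, which would not
# match the hidden state returned by net.init_hidden(batch_size)
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True, drop_last=True)
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True, drop_last=True)

The loops would then be "for inputs, labels in train_loader:" and "for inputs, labels in test_loader:" instead of iterating X_train / X_test directly.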


Thank you for the response. I ended up using Keras instead of PyTorch and have finished the sentiment analysis. Thank you again.