DataLoader returns error when iteration starts

I am training a BERT-base model on the IMDB dataset. However, I am unable to iterate through the PyTorch DataLoader.

Here is the code, fully reproducible. training_file is the above-mentioned CSV.

import transformers
from sklearn import model_selection
import torch
import pandas as pd
tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-cased", do_lower_case=True)
max_len = 512
train_batch_size = 8
# This class takes reviews and targets as arguments,
# splits the reviews, and tokenizes them.
class BERTDataset:
    def __init__(self, review, target): = review = target
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(

    def __getitem__(self, item):
        review = str([item])
        review = " ".join(review.split())

        tokenized_inputs = self.tokenizer.encode_plus(
            review,
            None,
            add_special_tokens=True,
            max_length=self.max_len,
            truncation=True,
        )

        ids = tokenized_inputs["input_ids"]
        mask = tokenized_inputs["attention_mask"]
        token_type_ids = tokenized_inputs["token_type_ids"]

        return {
            "ids": torch.tensor(ids, dtype=torch.long),
            "mask": torch.tensor(mask, dtype=torch.long),
            "token_type_ids": torch.tensor(token_type_ids, dtype=torch.long),
            "targets": torch.tensor([item], dtype=torch.float),
        }
dfx = pd.read_csv(training_file).fillna("none")
dfx['sentiment'] = dfx['sentiment'].apply(lambda x: 1 if x == 'positive' else 0)

df_train, df_valid = model_selection.train_test_split(
    dfx, test_size=0.1, stratify=dfx['sentiment'].values
)

# reset indices 
df_train = df_train.reset_index(drop=True)

# get ids, tokens, masks and targets  
train_dataset = BERTDataset(review=df_train['review'], target=df_train['sentiment'])

# wrap the dataset in a PyTorch DataLoader,
# which batches the inputs and targets
train_data_loader =
    train_dataset, batch_size=train_batch_size, num_workers=0
)

# iterate over the data loader
train_iter = iter(train_data_loader)

review, labels = next(train_iter)

When iterating through the DataLoader, the following error comes up. Is it something to do with my data formatting? I'd appreciate your inputs.

    RuntimeError                              Traceback (most recent call last)
    <ipython-input-19-c99d0829d5d9> in <module>()
          2 print(type(train_iter))
    ----> 4 images, labels = next(train_iter)

    5 frames
    /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/ in default_collate(batch)
         53             storage = elem.storage()._new_shared(numel)
         54             out = elem.new(storage)
    ---> 55         return torch.stack(batch, 0, out=out)
         56     elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
         57             and elem_type.__name__ != 'string_':

    RuntimeError: stack expects each tensor to be equal size, but got [486] at entry 0 and [211] at entry 1

Try to set

    tokenized_inputs = self.tokenizer.encode_plus(
        review,
        None,
        add_special_tokens=True,
        max_length=self.max_len,
        padding='max_length',
        truncation=True,
    )

This should force the tokenizer to pad all sequences to self.max_len. Errors like stack expects each tensor to be equal size during data loading usually mean that you are trying to build a batch from tensors of different sizes (jagged tensors are not allowed). You either have to

  1. ensure all tensors returned by __getitem__ have the same size (I used this one), or
  2. write a custom collate_fn for the DataLoader and pad the tensors yourself.
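For option 2, here is a minimal sketch of such a collate function, assuming the dict structure returned by BERTDataset.__getitem__ above (the name pad_collate_fn is my own, not part of any library):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate_fn(batch):
    # batch is a list of the dicts returned by BERTDataset.__getitem__.
    # Pad each variable-length field to the longest sequence in this batch;
    # 0 is BERT's [PAD] token id and also the right fill value for the
    # attention mask and token_type_ids.
    out = {
        key: pad_sequence([sample[key] for sample in batch],
                          batch_first=True, padding_value=0)
        for key in ("ids", "mask", "token_type_ids")
    }
    # targets are scalar tensors, so a plain stack works
    out["targets"] = torch.stack([sample["targets"] for sample in batch])
    return out
```

You would pass it as, ..., collate_fn=pad_collate_fn). Option 1 is simpler, but a collate_fn like this pads only to the longest sequence in each batch rather than to max_len, which can save memory and compute.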

Hi @mbednarski,

Thank you so much for the fix and the explanation. This works now.