RuntimeError: sizes must be non-negative in pytorch/text

Hi friends,
I am attempting to implement a neural translation model from one source language to another, but I am stumped on an issue that shows up when working with a large Europarl dataset in pytorch/text. I tried raising the issue on GitHub, to no avail. For deeper context, here is the runtime error that occurs:

File "train.py", line 97, in main
opt.train = create_dataset(opt, SRC, TRG)
File "Documents\transformers\Process.py", line 89, in create_dataset
opt.train_len = get_len(train_iter)
File "Documents\transformers\Process.py", line 95, in get_len
for i, b in enumerate(train):
File "C:\Users\Anaconda3\envs\alexandria\lib\site-packages\torchtext\data\iterator.py", line 157, in __iter__
yield Batch(minibatch, self.dataset, self.device)
File "C:\Users\Anaconda3\envs\alexandria\lib\site-packages\torchtext\data\batch.py", line 34, in __init__
setattr(self, name, field.process(batch, device=device))
File "C:\UsersAnaconda3\envs\alexandria\lib\site-packages\torchtext\data\field.py", line 201, in process
tensor = self.numericalize(padded, device=device)
File "C:\Users\Anaconda3\envs\alexandria\lib\site-packages\torchtext\data\field.py", line 323, in numericalize
var = torch.tensor(arr, dtype=self.dtype, device=device)
RuntimeError: sizes must be non-negative

The error occurs when attempting to enumerate through the iterator to retrieve the batch count. The code is based on the Transformer model found here.

import os
import pickle

import pandas as pd
from torchtext import data

# MyIterator and batch_size_fn are defined elsewhere in the project
# (taken from the Transformer code linked above)

def create_dataset(opt, SRC, TRG):

    print("creating dataset and iterator... ")

    raw_data = {'src' : [line for line in opt.src_data], 'trg': [line for line in opt.trg_data]}
    df = pd.DataFrame(raw_data, columns=["src", "trg"])
    
    # keep only pairs shorter than max_strlen words (approximated by counting spaces)
    mask = (df['src'].str.count(' ') < opt.max_strlen) & (df['trg'].str.count(' ') < opt.max_strlen)
    df = df.loc[mask]

    df.to_csv("translate_transformer_temp.csv", index=False)
    
    data_fields = [('src', SRC), ('trg', TRG)]
    train = data.TabularDataset('./translate_transformer_temp.csv', format='csv', fields=data_fields)

    train_iter = MyIterator(train, batch_size=opt.batchsize, device=opt.device,
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=True, shuffle=True)
    
    os.remove('translate_transformer_temp.csv')

    if opt.load_weights is None:
        SRC.build_vocab(train)
        TRG.build_vocab(train)
        pickle.dump(SRC, open('weights/SRC.pkl', 'wb'))
        pickle.dump(TRG, open('weights/TRG.pkl', 'wb'))

    opt.src_pad = SRC.vocab.stoi['<pad>']
    opt.trg_pad = TRG.vocab.stoi['<pad>']

    opt.train_len = get_len(train_iter)

    return train_iter

def get_len(train):

    # iterate over the whole iterator once just to count the batches;
    # the RuntimeError in the traceback is raised inside this loop
    for i, b in enumerate(train):
        pass

    return i + 1  # enumerate starts at 0, so add 1 for the batch count
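
To narrow this down, I was thinking of wrapping the enumeration in a try/except so I can at least see how many batches get built before the failure. This is just a rough sketch (find_failing_batch is a hypothetical helper of mine; train_iter is the iterator created above):

def find_failing_batch(train_iter):
    # walk the iterator and count how many batches are built
    # before the RuntimeError is raised during numericalization
    count = 0
    try:
        for i, batch in enumerate(train_iter):
            count = i + 1
    except RuntimeError as e:
        print(f"failed while building batch {count}: {e}")
    return count

At least that should tell me whether it dies on the very first batch or only once a particular example is reached.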

It would be of great help if someone could guide me on this issue. My guess is that it has something to do with the batch size or a CUDA memory issue, since it wasn't a problem on smaller datasets, but I am unsure. I am running a GTX 960M in a conda environment on Windows.
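
For what it's worth, the next things I plan to try are dropping empty or NaN rows from the DataFrame before writing the CSV, and rebuilding the iterator on the CPU to take CUDA out of the picture. Again, just a sketch reusing df, train, opt, MyIterator and batch_size_fn from the code above; I haven't confirmed that either of these is actually related to the error:

import torch

# 1) drop rows where either side is missing or whitespace-only
df = df.dropna()
df = df[(df['src'].str.strip() != '') & (df['trg'].str.strip() != '')]

# 2) rebuild the iterator on the CPU to rule out a CUDA memory issue
cpu_iter = MyIterator(train, batch_size=opt.batchsize, device=torch.device('cpu'),
                      repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                      batch_size_fn=batch_size_fn, train=True, shuffle=True)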