How can I create a custom DataLoader for the Multi30k dataset?

I'm trying to train the attention model from the PyTorch tutorial on a TPU in a Colab environment. To train on multiple TPU cores, I need a DataLoader to pass to torch_xla.distributed.parallel_loader.ParallelLoader(), but I'm running into trouble converting the Multi30k dataset into a DataLoader. What am I missing?

import torch
from torchtext.datasets import Multi30k

# SRC and TRG are torchtext Fields defined earlier (tokenizer setup omitted)
train_data, valid_data, test_data = Multi30k.splits(exts=('.de', '.en'), fields=(SRC, TRG))

SRC.build_vocab(train_data, min_freq=2)
TRG.build_vocab(train_data, min_freq=2)

# for item in torch.utils.data.DataLoader(train_data, batch_size=128):  # Error occurs
#     print(item)

The error message is: TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'torchtext.data.example.Example'>.
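From what I understand, the default collate function only handles tensors and similar types, so I tried writing a custom collate_fn that numericalizes the Example objects using the Fields. This is just a sketch of my attempt (I'm assuming Field.process() is the right call for padding and numericalizing a list of tokenized examples), so it may well be the wrong approach:

import torch

def collate_examples(batch):
    # batch is a list of torchtext Example objects with .src / .trg
    # attributes (names come from the fields=(SRC, TRG) mapping above)
    src = SRC.process([example.src for example in batch])  # pad + numericalize
    trg = TRG.process([example.trg for example in batch])
    return src, trg

train_loader = torch.utils.data.DataLoader(
    train_data, batch_size=128, shuffle=True, collate_fn=collate_examples)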
Alternatively, if there's another way to create an xla ParallelLoader() from a `BucketIterator`, any advice would be appreciated.
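For context, this is roughly what I had in mind: a thin wrapper that unpacks the BucketIterator's Batch objects into plain (src, trg) tensor tuples before handing them to ParallelLoader. The TupleIterator class is my own invention, not a torchtext API, and I'm not sure ParallelLoader accepts a wrapper like this, so treat it as a sketch of the idea rather than working code:

import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
from torchtext.data import BucketIterator

device = xm.xla_device()

train_iterator = BucketIterator(train_data, batch_size=128)

class TupleIterator:
    # Yield plain (src, trg) tensor tuples so ParallelLoader never sees
    # torchtext Batch objects (hypothetical wrapper, not a library API)
    def __init__(self, iterator):
        self.iterator = iterator
    def __len__(self):
        return len(self.iterator)
    def __iter__(self):
        for batch in self.iterator:
            yield batch.src, batch.trg

para_loader = pl.ParallelLoader(TupleIterator(train_iterator), [device])
for src, trg in para_loader.per_device_loader(device):
    pass  # training step would go here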