Modify PyTorch tutorial example to use dataloader and GPU

I’ve had success adapting the RNN name classification example from the PyTorch tutorials for my own purposes, with only minimal changes to the parameters: https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html

I’m using it to take a SMILES string as input (a way of writing a chemical structure in text form, where every character matters; e.g. "CCO" is ethanol) and output a single value.

Recently I moved the model to the GPU with .to(device), but training on the GPU runs at about half the speed of the CPU, which is very unexpected given it’s an Nvidia 1080 vs a middling i5. I was told this may be because the example doesn’t use a DataLoader, so I’ve been trying to implement one, without success so far.
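
For context, here is roughly what a single training step looks like in my version after the .to(device) change. This is a sketch rather than my exact code: the RNN class mirrors the tutorial’s (minus the LogSoftmax, since I output a single value), and the MSE loss / SGD optimizer are just stand-ins for what I use for regression. Everything runs one sample and one character at a time, so the GPU is fed lots of tiny tensors:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class RNN(nn.Module):
    # same structure as the tutorial's RNN, minus the LogSoftmax,
    # so the output is a single unbounded value
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(input_size + hidden_size, output_size)

    def forward(self, x, hidden):
        combined = torch.cat((x, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(combined)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)

n_chars = 80                      # roughly my unique character count
rnn = RNN(n_chars, 128, 1).to(device)
criterion = nn.MSELoss()          # stand-in loss for the single-value output
optimizer = torch.optim.SGD(rnn.parameters(), lr=0.005)

def train_one(smiles_tensor, target):
    # smiles_tensor: (seq_len, 1, n_chars) one-hot, built like the
    # tutorial's lineToTensor; target: tensor of shape (1, 1)
    smiles_tensor = smiles_tensor.to(device)
    target = target.to(device)
    hidden = rnn.initHidden().to(device)

    optimizer.zero_grad()
    for i in range(smiles_tensor.size(0)):   # one character at a time
        output, hidden = rnn(smiles_tensor[i], hidden)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    return loss.item()
```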

So I have to ask you all: is it true that a DataLoader with the right batch size will speed up training of this kind of model on the GPU?

And how do you implement a DataLoader that takes the characters of a string and turns them into one-hot vectors? Almost every DataLoader example I’ve found either handles images (not useful here) or handles text with word-level vocabularies of thousands of tokens. My total unique character count is around 80, so those examples needlessly complicate things.
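
To make it concrete, this is the direction I’ve been attempting (a rough sketch of my attempt, not working code I’m confident in; the character set, the padding approach, and names like SmilesDataset are all mine):

```python
import torch
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence

# made-up character set for the example; my real one has ~80 unique characters
CHARS = sorted(set("CNOSPFBrclnos()[]=#@+-123456789%/\\."))
CHAR_TO_IDX = {c: i for i, c in enumerate(CHARS)}

class SmilesDataset(Dataset):
    def __init__(self, smiles_list, targets):
        self.smiles = smiles_list
        self.targets = targets

    def __len__(self):
        return len(self.smiles)

    def __getitem__(self, idx):
        s = self.smiles[idx]
        # one-hot encode one string: (seq_len, n_chars)
        x = torch.zeros(len(s), len(CHARS))
        for i, ch in enumerate(s):
            x[i, CHAR_TO_IDX[ch]] = 1.0
        y = torch.tensor([self.targets[idx]], dtype=torch.float32)
        return x, y

def collate(batch):
    # SMILES strings have different lengths, so pad with zero rows and stack:
    # xs becomes (max_seq_len, batch_size, n_chars), like the tutorial's
    # (seq_len, 1, n_chars) but with a real batch dimension; ys is (batch_size, 1)
    xs, ys = zip(*batch)
    return pad_sequence(xs), torch.stack(ys)

# toy data just to show the shapes
smiles = ["CCO", "c1ccccc1", "CC(=O)O"]
values = [0.5, 1.2, 0.8]
loader = DataLoader(SmilesDataset(smiles, values),
                    batch_size=2, shuffle=True, collate_fn=collate)

for xs, ys in loader:
    print(xs.shape, ys.shape)   # e.g. torch.Size([8, 2, 35]) torch.Size([2, 1])
```

If this is the right direction, my remaining question is how the tutorial’s training loop should change to consume whole batches (the tutorial’s initHidden() makes a (1, hidden_size) hidden state, so presumably that needs to become (batch_size, hidden_size)), and whether that batching is really what recovers the GPU speed.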

Thanks all.