Is PyTorch parallel?

Hello!

I am trying to speed up my code and unfortunately don't have a GPU that supports CUDA, so I wanted to know if I can parallelise my code (things like batch processing of input by an LSTM) to let it run over several CPU cores. Is this automatically taken care of in PyTorch, or does one have to make extensive, complicated modifications to achieve this?
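For reference, here is a minimal sketch of the kind of batched LSTM forward pass I mean (the layer sizes and batch shape are just hypothetical examples), along with the intra-op thread count PyTorch reports on CPU:

```python
import torch
import torch.nn as nn

# PyTorch's CPU backend already splits many ops across intra-op threads;
# this reports how many threads it will use (and set_num_threads can override it).
print(torch.get_num_threads())

# Hypothetical example: a batch of 32 sequences, each 50 steps of 10 features
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=1, batch_first=True)
x = torch.randn(32, 50, 10)  # (batch, seq_len, input_size)

out, (h, c) = lstm(x)
print(out.shape)  # per-step outputs: (batch, seq_len, hidden_size)
```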