Could PyTorch use nn.DataParallel for training in Windows

If yes, could anybody give me an example? Thanks a lot.

I tried it with an LSTM model, but it does not work.
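For reference, this is roughly what I am trying (a minimal sketch; the model and sizes are made up, with `batch_first=True` so `DataParallel` can split the batch along dimension 0):

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, input_size=8, hidden_size=16, num_classes=4):
        super().__init__()
        # batch_first=True puts the batch on dim 0, which is the
        # dimension nn.DataParallel scatters across devices
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, seq_len, hidden_size)
        return self.fc(out[:, -1, :])  # classify from the last time step

model = LSTMClassifier()
if torch.cuda.is_available():
    model = model.cuda()
model = nn.DataParallel(model)  # replicates the model, splits each batch

x = torch.randn(32, 10, 8)  # (batch, seq_len, input_size)
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)
print(out.shape)  # torch.Size([32, 4])
```

With no visible GPU, `DataParallel` simply falls through to the wrapped module, so the forward pass itself should still run on CPU.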