PyTorch Tutorial for Deep Learning Researchers


I used TensorFlow for deep learning research until recently, and have now started learning PyTorch.
PyTorch’s syntax is simpler than TensorFlow’s, which makes it easier for me to implement neural network models.

While studying PyTorch, I created this tutorial code.
I hope this tutorial helps you get started with PyTorch.


Yes, whoever came up with PyTorch’s high-level design was a genius. I think its design is objectively superior to any other Python framework’s. In TF or Theano you invariably end up ditching the object-oriented style (if you had one to begin with at all); in PyTorch it makes too much sense to ditch.


The design was initially seeded from three libraries: torch-autograd, Chainer, and LuaTorch-nn.
Then we iterated on it for over a month between Sam Gross, Adam Paszke, me, Adam Lerer, and Zeming Lin, with occasional input from pretty much everyone. We initially didn’t have a functional interface at all (F.relu(), for example); Sergey Zagoruyko pestered us to death until we saw the value in it, and we hurriedly wrote and committed it at the last minute.
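For anyone unfamiliar with what that functional interface refers to: here is a minimal sketch contrasting the two equivalent styles PyTorch ended up with, the module style (`nn.ReLU`) and the functional style (`F.relu`). The tensor values are just made-up numbers for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 2.0])

# Module style: the activation is a layer object you construct and compose.
relu_layer = nn.ReLU()
a = relu_layer(x)

# Functional style: a plain function call, no object to construct.
b = F.relu(x)

print(torch.equal(a, b))  # True: both zero out the negative entries
```

The functional style is handy for stateless operations like activations, where constructing a layer object adds nothing.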

We’re glad that you like it.


Thank you for these tutorials.

I recently went through a course on DL with Keras, so I thought it would be a good idea to reproduce what I learned in that course, port it over, and learn PyTorch.
It seems I have got some fundamentals wrong in PyTorch. I copied your code for the linear regression sample, but it doesn’t fit correctly the way it did in Keras. Obviously I am missing something. :frowning: I’ve tried different optimizers and learning rates.

PyTorch code

Keras code

What am I doing wrong?

Ok, I’ve realised my mistake: I simply wasn’t training for long enough. The number of epochs may need to be in the thousands, for example.
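To illustrate the point about epoch count: here is a minimal linear regression sketch (not the tutorial’s exact code; the data, learning rate, and epoch count are my own assumptions) showing that plain SGD on a tiny model can need thousands of full-batch epochs to fit well.

```python
import torch
import torch.nn as nn

# Toy data: y = 2x + 1 plus a little noise (made-up example values).
torch.manual_seed(0)
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.01 * torch.randn_like(x)

model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Plain SGD on this problem converges slowly, so run a few thousand
# epochs rather than the handful that Keras examples often show.
for epoch in range(3000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# The learned parameters should approach the true slope 2 and intercept 1.
print(model.weight.item(), model.bias.item())
```

With only a few dozen epochs the fit looks broken, which matches the symptom described above; the model is fine, it just hasn’t converged yet.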

Great tutorial!

Here are some more of the best Deep Learning tutorials.

Adding my own recommended list:

Great modular structure to each PyTorch class:

Excellent for beginners:

And my own Jupyter notebooks:

In Korean:
