I used TensorFlow for deep learning research until recently, and this time I started learning PyTorch.
PyTorch's syntax is simpler than TensorFlow's, which made it easier for me to implement neural network models.
As I was studying PyTorch, I created this tutorial code.
I hope this tutorial helps you get started with PyTorch.
Yes, whoever came up with PyTorch's high-level design was a genius. I think its design is objectively superior to any other Python framework's. In TF or Theano you invariably end up ditching the object-oriented style (if you had one to begin with at all); in PyTorch it makes too much sense to ditch.
The design was initially seeded from three libraries: torch-autograd, Chainer, and LuaTorch-nn.
Then we iterated on it for over a month between Sam Gross, Adam Paszke, me, Adam Lerer, and Zeming Lin, with occasional input from pretty much everyone. We initially didn't have a functional interface at all (F.relu(), for example); Sergey Zagoruyko pestered us to death until we saw the value in it, and we hurriedly wrote and committed it at the last minute.
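To illustrate the two styles mentioned above, here is a minimal sketch of the module-based activation versus the functional interface (`F.relu`); the tensor values are just made-up example inputs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 2.0])

# Module style: the activation is an object, typically created in __init__
relu_module = nn.ReLU()
out_module = relu_module(x)

# Functional style: a stateless call, convenient inside forward()
out_functional = F.relu(x)

# Both produce the same result: negatives clamped to zero
assert torch.equal(out_module, out_functional)
```

The functional form is handy for parameter-free operations like activations, where instantiating a module object adds no value.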
I recently went through a course on DL with Keras, so I thought it would be a good idea to reproduce what I learned in that course, porting it over to learn PyTorch.
It seems I have gotten some fundamentals wrong in PyTorch. I copied your code for the linear regression sample, but it doesn't fit correctly the way it did in Keras. Obviously I am missing something; I've tried different optimizers and learning rates.
What am I doing wrong?
OK, I've realized my mistake: the number of epochs needs to be much larger, in the thousands, for example.
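The fix described above can be sketched as follows. This is not the original poster's code, just a minimal linear regression on made-up toy data (y = 2x + 1), showing that with a plain SGD loop the model needs thousands of epochs to fit:

```python
import torch
import torch.nn as nn

# Toy data following y = 2x + 1 (hypothetical example data)
x = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = 2 * x + 1

model = nn.Linear(1, 1)          # one weight, one bias
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# The key point: run thousands of epochs, not just a handful
for epoch in range(5000):
    optimizer.zero_grad()          # clear accumulated gradients
    loss = criterion(model(x), y)  # forward pass + loss
    loss.backward()                # backpropagate
    optimizer.step()               # update weight and bias

# After enough epochs the learned parameters approach w = 2, b = 1
```

With only a few dozen epochs at this learning rate, the loss is still far from zero, which is exactly the symptom of a model that "doesn't fit" despite correct code.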