YouTube video on autograd: create a Variable, apply operations, backpropagate
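
For reference, a minimal sketch of that workflow (not the exact code from the video; using the current API, where Variable has been merged into Tensor via requires_grad):

```python
import torch

# create a "Variable": a tensor that tracks gradients
x = torch.tensor([2.0, 3.0], requires_grad=True)

# apply operations; autograd records them in a graph
y = (x * x).sum()          # y = x1^2 + x2^2

# backpropagate: populates x.grad with dy/dx
y.backward()
print(x.grad)              # tensor([4., 6.]) since dy/dxi = 2*xi
```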


Nice one, Hugh! :thumbsup:
I’d suggest trying a slightly calmer environment, though. The background noise is quite noticeable :smile:
Also, the mouse clicks are really loud and almost cause my speakers to overdrive.


Part 2: sharing weights across multiple timesteps, in a simplified RNN-style network:

This is not really about PyTorch/autograd as such, although I do use PyTorch, and I do sneak in a quick reference to autograd at the end :slight_smile:. Anyway, sneaking these in here :slight_smile:. These basically give an intro to the maths of backprop in an RNN setting, going into the chain rule, derivatives, etc., and then writing it all out in Python. Finally, the same thing is done using autograd, which takes just one line :slight_smile: (see the sketch after the part list below).

part 1: intro, forward prop, including Python code

part 2: backprop concepts

part 3: why backprop?

part 4: backprop maths, chain rule

part 5: definitions of gradOutput and gradWeights, for the Python code

part 6: Python code

part 7: using autograd instead. one line :slight_smile:
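
To make those parts concrete, here is a rough sketch of a simplified shared-weight RNN (my own toy setup, not the exact code from the videos): a hand-written backward pass that accumulates gradWeights across timesteps via the chain rule, followed by the autograd version, where the whole backward pass is the single loss.backward() call:

```python
import torch

torch.manual_seed(0)
T, H = 4, 3                                  # timesteps, hidden size
W = torch.randn(H, H, requires_grad=True)    # one weight matrix shared across all timesteps
xs = [torch.randn(H) for _ in range(T)]      # toy input sequence

# forward: h_t = tanh(W @ h_{t-1} + x_t), loss = sum of the final hidden state
hs = [torch.zeros(H)]
for t in range(T):
    hs.append(torch.tanh(W @ hs[-1] + xs[t]))
loss = hs[-1].sum()

# manual backward: chain rule, accumulating gradWeights over timesteps
with torch.no_grad():
    gradWeights = torch.zeros(H, H)
    gradOutput = torch.ones(H)                        # dloss/dh_T
    for t in reversed(range(T)):
        grad_pre = gradOutput * (1 - hs[t + 1] ** 2)  # back through tanh
        gradWeights += torch.outer(grad_pre, hs[t])   # shared W: gradients add up
        gradOutput = W.t() @ grad_pre                 # pass gradient back to h_{t-1}

# autograd does the same thing in one line
loss.backward()
print(torch.allclose(gradWeights, W.grad))            # True
```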

Thank you very much!


New video (well, to be fair, I actually posted it in reply to another thread, but I’m keeping the videos in one place):

“Create pytorch rnn functor, pass random input through it”
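
Roughly what that looks like (a sketch with made-up sizes, not the exact code from the video):

```python
import torch
from torch import nn

seq_len, batch_size, input_size, hidden_size = 5, 1, 4, 8

# the RNN "functor": a callable module
rnn = nn.RNN(input_size=input_size, hidden_size=hidden_size)

# random input of shape (seq_len, batch, input_size), plus an initial hidden state
x = torch.randn(seq_len, batch_size, input_size)
h0 = torch.zeros(1, batch_size, hidden_size)

output, hn = rnn(x, h0)
print(output.shape)   # torch.Size([5, 1, 8]) -- one hidden state per timestep
print(hn.shape)       # torch.Size([1, 1, 8]) -- final hidden state
```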

Next part: train the RNN to memorize/predict a sequence of integers. Along the way, we cover adding a criterion, calculating the loss, backpropping the loss, creating an optimizer, handling embedding/unembedding, and taking the argmax. Whew! :slight_smile:
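
And a rough sketch of that next part, with all names and sizes made up: embedding the integers, running the RNN, adding a criterion, backpropping the loss, stepping an optimizer, and taking the argmax to get integers back out:

```python
import torch
from torch import nn, optim

torch.manual_seed(123)
vocab_size, embed_size, hidden_size = 10, 8, 16

# toy task: predict the next integer in a fixed sequence
seq = torch.tensor([3, 1, 4, 1, 5, 9, 2, 6])
inputs, targets = seq[:-1], seq[1:]

embed = nn.Embedding(vocab_size, embed_size)      # integer -> vector ("embedding")
rnn = nn.RNN(embed_size, hidden_size)
unembed = nn.Linear(hidden_size, vocab_size)      # hidden -> logits ("unembedding")
criterion = nn.CrossEntropyLoss()
params = list(embed.parameters()) + list(rnn.parameters()) + list(unembed.parameters())
optimizer = optim.Adam(params, lr=0.01)

for epoch in range(200):
    optimizer.zero_grad()
    x = embed(inputs).unsqueeze(1)                # (seq_len, batch=1, embed_size)
    out, _ = rnn(x)                               # (seq_len, 1, hidden_size)
    logits = unembed(out.squeeze(1))              # (seq_len, vocab_size)
    loss = criterion(logits, targets)             # calculate the loss
    loss.backward()                               # backprop the loss
    optimizer.step()

pred = logits.argmax(dim=1)                       # take the argmax to get integers back
print(pred.tolist(), targets.tolist())
```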