Nice one, Hugh!
I’d suggest trying a slightly calmer environment, though. The background noise is quite noticeable.
Also, the mouse clicks are really loud and almost overdrive my speakers.
Part 2: sharing weights across multiple timesteps, in a simplified RNN-style network:
This is not really about pytorch/autograd as such, although I do use pytorch, and I do sneak in a quick reference to autograd at the end. Anyway, sneaking these in here. These basically give an intro to the maths of backprop in an RNN setting, going into the chain rule, derivatives, etc., and then writing it all out in python. Finally, doing the same thing using autograd, which takes just one line:
part 1: intro, forward prop, including python code
part 2: backprop concepts
part 3: why backprop?
part 4: backprop maths, chain rule
part 5: definitions of gradOutput and gradWeights, for the python code
part 6: python code
part 7: using autograd instead, in one line
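To give a flavour of parts 4–7: here is a minimal sketch of backprop through a shared-weight RNN, written out by hand and then checked against autograd's one-line `backward()`. The toy recurrence (`tanh`, sum loss, sizes) is my own choice for illustration, not necessarily what the video uses:

```python
import torch

torch.manual_seed(0)
T, H = 3, 4  # toy timestep count and hidden size
W = torch.randn(H, H, requires_grad=True)  # one W, shared across all timesteps
xs = [torch.randn(H) for _ in range(T)]

# forward prop: h_t = tanh(W @ h_{t-1} + x_t)
hs = [torch.zeros(H)]
for t in range(T):
    hs.append(torch.tanh(W @ hs[-1] + xs[t]))
loss = hs[-1].sum()

# manual backprop: push gradOutput back through time, accumulating gradWeights
with torch.no_grad():
    grad_h = torch.ones(H)              # d loss / d h_T (loss is a plain sum)
    grad_W = torch.zeros(H, H)
    for t in reversed(range(T)):
        grad_pre = grad_h * (1 - hs[t + 1] ** 2)  # tanh' = 1 - tanh^2
        grad_W += torch.outer(grad_pre, hs[t])    # shared weight: sum over timesteps
        grad_h = W.t() @ grad_pre                 # pass gradient back to h_{t-1}

# autograd does all of the above in one line
loss.backward()
print(torch.allclose(grad_W, W.grad, atol=1e-5))  # → True
```

The key point for weight sharing is the `+=`: because the same `W` is used at every timestep, its gradient is the sum of the per-timestep contributions.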
Thank you very much!
New video (well, to be fair, I actually posted it in reply to another thread, but keeping the videos in one place):
“Create pytorch rnn functor, pass random input through it”
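For reference, the gist of that video title in code. A pytorch `nn.RNN` module is callable like a function (hence "functor"); the sizes here are made up:

```python
import torch
from torch import nn

# create the rnn functor (toy sizes, my own choice)
rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

# pass random input through it
x = torch.randn(2, 5, 8)   # (batch, seq_len, input_size)
out, h_n = rnn(x)
print(out.shape)   # torch.Size([2, 5, 16])  per-timestep outputs
print(h_n.shape)   # torch.Size([1, 2, 16])  final hidden state
```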
Next part: train the rnn to memorize/predict a sequence of integers. On the way, pass through adding a criterion, calculating the loss, backpropping the loss, creating an optimizer, handling embedding/unembedding, and taking the argmax. Whew!
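The steps listed above can be sketched end-to-end like this. All names, sizes, the optimizer choice, and the step count are my own assumptions, just to show how the pieces fit together:

```python
import torch
from torch import nn

torch.manual_seed(0)
vocab, hidden = 10, 32
seq = torch.tensor([3, 1, 4, 1, 5, 9, 2, 6])   # sequence of integers to memorize
inp, target = seq[:-1], seq[1:]                 # predict the next integer

embed = nn.Embedding(vocab, hidden)             # embedding: integer -> vector
rnn = nn.RNN(hidden, hidden, batch_first=True)
unembed = nn.Linear(hidden, vocab)              # unembedding: vector -> logits
criterion = nn.CrossEntropyLoss()               # the criterion
params = list(embed.parameters()) + list(rnn.parameters()) + list(unembed.parameters())
opt = torch.optim.Adam(params, lr=0.01)         # the optimizer

for step in range(200):
    opt.zero_grad()
    out, _ = rnn(embed(inp).unsqueeze(0))       # (1, seq_len, hidden)
    logits = unembed(out).squeeze(0)            # (seq_len, vocab)
    loss = criterion(logits, target)            # calculate the loss
    loss.backward()                             # backprop the loss
    opt.step()

pred = logits.argmax(dim=-1)                    # take the argmax
print(pred.tolist(), target.tolist())
```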