Variable, Function and Module

Hello, I am very new to PyTorch and want to clarify my understanding of it.
Do I understand each component of PyTorch correctly?

  1. Computational Graph
    A computational graph is composed of Variables and Functions.
    A Function creates a Variable from other Variables;
    thus, a Function has type: Variable* => Variable*
    and can be viewed as an edge of the computational graph, whereas the nodes are the Variables.

If we only use PyTorch's predefined Functions, then we can compute gradients directly using autograd.
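
For example, here is a minimal sketch of that case: only predefined operations, with autograd filling in the gradients. (In recent PyTorch versions Variable has been merged into Tensor, so a tensor with `requires_grad=True` plays the Variable role, and each result's `grad_fn` is the Function that created it.)

```python
import torch

# Leaf Variables/tensors -- the nodes of the graph.
x = torch.tensor([2.0, 3.0], requires_grad=True)
w = torch.tensor([4.0, 5.0], requires_grad=True)

# Predefined Functions (mul, sum) build the graph as edges
# connecting the input Variables to the output Variable.
y = (x * w).sum()
print(y.grad_fn)  # the Function that created y (e.g. SumBackward0)

# autograd walks the graph backward and fills in .grad.
y.backward()

print(x.grad)  # tensor([4., 5.]) == w
print(w.grad)  # tensor([2., 3.]) == x
```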

  2. When to define a new Function
    2.1) When a Function is too complex to keep in the computational graph for the backward pass (which might slow down performance)

    2.2) When you are using an operation that is not a predefined Function (autograd cannot be used); see the sketch below
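
For case 2.2, here is a minimal sketch of a custom `torch.autograd.Function`, using a hypothetical element-wise exp as the operation: since it is treated as not predefined, both forward and backward are written by hand instead of being derived by autograd.

```python
import torch

class MyExp(torch.autograd.Function):
    """Hypothetical custom Function: element-wise exp with a hand-written backward."""

    @staticmethod
    def forward(ctx, input):
        # Compute the output and save what backward will need.
        result = input.exp()
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        # d/dx exp(x) = exp(x), so reuse the saved forward result.
        (result,) = ctx.saved_tensors
        return grad_output * result

x = torch.randn(3, requires_grad=True)
y = MyExp.apply(x).sum()
y.backward()
print(torch.allclose(x.grad, x.exp()))  # True
```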
