Hello, I am very new to PyTorch and want to check my understanding of it.
Do I correctly understand each component of PyTorch?
1) Computational Graph
A computational graph is composed of Variables and Functions.
A Function creates a Variable from other Variables,
so a Function has type: Variable* => Variable*
and can be viewed as an edge of the computational graph, whereas the nodes are the Variables.
If we only use the predefined Functions of PyTorch, we can compute gradients directly using autograd.
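A minimal sketch of that idea, assuming a recent PyTorch where `Variable` has been merged into `Tensor`: each predefined operation records a `grad_fn` (the Function node) so `backward()` can traverse the graph.

```python
import torch

# Build a tiny graph from predefined operations only; autograd
# records each op's grad_fn, so no manual gradient is needed.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x * x).sum()   # y = x1^2 + x2^2; y.grad_fn points into the graph

y.backward()        # traverses the recorded graph: dy/dx = 2x
print(x.grad)       # tensor([4., 6.])
```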

2) When to define a new Function
2.1) When a Function is too complex to keep in the computational graph for the backward pass (which might slow down performance).
2.2) When you are using a Function that is not predefined (so autograd cannot be used).
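A sketch of case 2.2, assuming the current `torch.autograd.Function` API with static `forward`/`backward` methods (the `MyExp` name is just an illustration): you supply the backward formula yourself instead of letting autograd record the graph.

```python
import torch

# Hypothetical custom Function with a hand-written backward.
class MyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        out = torch.exp(x)
        ctx.save_for_backward(out)   # stash what backward will need
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (out,) = ctx.saved_tensors
        return grad_output * out     # d/dx exp(x) = exp(x)

x = torch.tensor([0.0, 1.0], requires_grad=True)
y = MyExp.apply(x).sum()             # custom Functions are called via .apply
y.backward()
print(x.grad)                        # equals exp(x)
```

Because `backward` is written by hand, autograd does not need to keep the intermediate graph of whatever `forward` computes, which is also the remedy for case 2.1.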