Autograd/backward in RNN


I’m developing a deep learning framework, and in particular RNN modules, for a capstone project in C++.

As part of this, I’m studying how PyTorch computes gradients for RNNs.

While reading the source code, especially under /pytorch/torch/nn/modules/,
I found a variable named _rnn_impls containing _VF.rnn_tanh and _VF.rnn_relu.
In addition, the forward function of the RNN class returns its results by calling one of these impl functions.
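To make sure I understand the structure I'm describing: my reading is that the RNN class looks up an implementation function by mode and calls it in forward. Here is a minimal sketch of that dispatch pattern in plain Python (the names and bodies here are my own simplification, not PyTorch's actual code):

```python
# Simplified stand-ins for the backend functions that _VF exposes.
def rnn_tanh(x):
    return f"tanh impl ran on {x}"

def rnn_relu(x):
    return f"relu impl ran on {x}"

# A mode-to-implementation table, analogous to _rnn_impls.
_rnn_impls = {
    "RNN_TANH": rnn_tanh,
    "RNN_RELU": rnn_relu,
}

class RNN:
    def __init__(self, nonlinearity="tanh"):
        # The chosen nonlinearity selects which implementation is used.
        self.mode = "RNN_TANH" if nonlinearity == "tanh" else "RNN_RELU"

    def forward(self, x):
        # forward dispatches to the implementation for this mode.
        impl = _rnn_impls[self.mode]
        return impl(x)

print(RNN("tanh").forward("input"))  # uses rnn_tanh
print(RNN("relu").forward("input"))  # uses rnn_relu
```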

However, I could not figure out what _VF means or how it is used.
I found that there is a VF class in the module, but the code is very short and doesn't give me enough information.
Is it related to the backend C/C++ source code?
Could you help me understand how it works and point me to the related source code?