Could pure Python code be provided, at least for the layers in the PyTorch API? Something like a switch to make it possible.

It would be good for fast experimenting, and it would make at least me happy. A lot of work has already been done for performance by implementing the JIT. Could similar work be done for research?

Is all the meat in Caffe2? I mean the logic of the layers (not all of them), etc.


UPDATED QUESTION: The main question remains the same: are the neural network layers in Caffe2? I have read an article about PyTorch internals and I am really excited about it. The main component that keeps me from understanding the PyTorch code is cwrap, which uses custom YAML files to generate code for the different tensor types in C++. I had read the code before that and always wondered what it was all about. The whole process is really smart, and it is inherited from Lua Torch, especially cwrap. The main downside is that it is impossible to get at the code directly: you have to go through the very long cwrap story to find the code for a layer. Would it be possible to have Python code for the layers for experimenting? For example, a develop mode that uses purely Python code, at least for the layers.
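For experimenting, nothing stops us from prototyping a layer in plain Python today and swapping the fast kernel back in later. As a minimal sketch (my own naive implementation, not PyTorch code; stride 1, no padding, single channel), the cross-correlation that frameworks call "convolution" is just four nested loops:

```python
def conv2d_naive(x, w):
    """Naive 2-D cross-correlation (what deep-learning frameworks call
    "convolution"): x is an H x W input, w is a kH x kW kernel.
    Stride 1, no padding -- slow, but every step is visible Python."""
    H, W = len(x), len(x[0])
    kH, kW = len(w), len(w[0])
    out = [[0.0] * (W - kW + 1) for _ in range(H - kH + 1)]
    for i in range(H - kH + 1):          # slide the kernel vertically
        for j in range(W - kW + 1):      # ... and horizontally
            s = 0.0
            for di in range(kH):         # elementwise multiply-accumulate
                for dj in range(kW):
                    s += x[i + di][j + dj] * w[di][dj]
            out[i][j] = s
    return out

print(conv2d_naive([[1, 2], [3, 4]], [[1, 0], [0, 1]]))  # -> [[5.0]]
```

A develop mode could dispatch to something like this instead of the compiled kernel; the output can be checked against `F.conv2d` on small inputs.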

ORIGINAL QUESTION: Where is the F.conv2d source code?

So far I have understood that all the meat (the heavy computation and the most interesting tasks) is in Caffe2, and that the overall logic of PyTorch is this: ATen is the core library, Autograd is layered on top of ATen, and if you need a convolutional layer (or whatever else) you go to Caffe2; it is all fetched dynamically from different places, and the result is PyTorch.
Is this statement true?


I followed `from .. import functional as F` and it led to this code:

conv2d = _add_docstr(torch.conv2d, r"""...

which only attaches a docstring to a function defined somewhere else. And the PyTorch docs don't have a link to the source code.
Where is the actual code for F.conv2d?

UPDATE: What I have understood so far is that the C++ code is generated automatically and that Python is connected to it through pybind11. My little research into backpropagation in PyTorch has stopped at the convolutional layer, which is not accessible. I would be happy if someone provided a guide or tutorial on ATen, which is a complete black box with no manual.
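One way to see that a function like `torch.conv2d` is a C-extension binding with no Python source is to ask `inspect` for its source file: for anything implemented in C this raises `TypeError`. A small sketch (demonstrated on `math.sqrt`, a stdlib C builtin, so it runs without PyTorch installed; `torch.conv2d` behaves the same way on my machine):

```python
import inspect
import math

def has_python_source(fn):
    """Return True if `fn` has Python source that inspect can locate,
    False if it is a C-extension builtin (inspect raises TypeError)."""
    try:
        inspect.getsourcefile(fn)
        return True
    except TypeError:
        return False

print(has_python_source(math.sqrt))          # C builtin -> False
print(has_python_source(has_python_source))  # plain Python function -> True
```

That `TypeError` is exactly why the docs cannot link to source: there is no `.py` file to point at, only the generated C++ behind the pybind11 binding.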

UPDATE: There is a cool article explaining the code-generation procedure. As far as I can tell my understanding is not far off, and it really helped.

UPDATE: I looked at the torch installation on my local server. There is no ready-to-use convolutional layer, even though I thought we receive ready-to-use code when we install the package. So is the code generated completely on the fly, even on the local machine, or not? There is no csrc folder, so it must have been compiled into some unknown place. Where is that place? Is it the lib folder?
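One way to answer "where did it go" is to list the compiled extension libraries shipped inside an installed package: after `pip install`, the generated C++ ends up as `.so`/`.pyd` files under the package directory (for torch, typically in its `lib` folder). A hypothetical helper (my own, not a PyTorch API) that works for any installed package:

```python
import importlib.util
import pathlib

def extension_libs(package_name):
    """List compiled extension libraries (.so / .pyd / .dylib) shipped
    inside an installed package -- i.e. where generated native code
    lands after installation. Returns [] for pure-Python packages and
    for non-package modules."""
    spec = importlib.util.find_spec(package_name)
    if spec is None or not spec.submodule_search_locations:
        return []
    root = pathlib.Path(list(spec.submodule_search_locations)[0])
    return sorted(p.name for p in root.rglob("*")
                  if p.suffix in {".so", ".pyd", ".dylib"})

print(extension_libs("json"))   # pure-Python stdlib package -> []
# extension_libs("torch") on my machine lists the libraries under
# torch/lib, including the caffe2 ones mentioned below.
```

So the answer to "is it the lib folder?" can be checked directly: run this on `"torch"` and see which directory the `.so` files live in.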

UPDATE: I can see a lot of *caffe2*.so files. Does that mean PyTorch relies heavily on ATen, even though it is moving to c10 and Caffe2?

PS: Sorry for the spam; I just want to understand the whole picture of PyTorch.