How can I modify C++ files in PyTorch?

How can I modify C++ files such as the caffe2 or csrc sources? I can only find header files, so I can't make modifications to test what I want.

PyTorch calls methods from C++ files, and I want to make changes in files such as caffe2/serialize/inline_container to test what I want to see.

I am totally lost as a beginner, and it's been 2 weeks that I've been trying to get familiar with this software…

I beg your generosity.

You can find the inline_container.cc file here, and it will have the same relative location in your locally cloned GitHub repository.
Manipulate the file locally and rebuild PyTorch afterwards.

Thanks for answering! I was trying to follow the GitHub paths locally, but there are only headers… where can I find the .cpp files? I am using Anaconda; could that be affecting it?

You would need to git clone the repository, manipulate the file, and build PyTorch from source.
You cannot manipulate C++ files in a pre-built binary and expect to see changes without compiling the code again.

So… if I just install PyTorch from their website or via conda, I will only find the headers, not the .cpp files? I have to clone their GitHub repository and build PyTorch by myself?

Yes, since C++ is a compiled language you would need to clone the repository and rebuild PyTorch from source; changes to .cpp files won't be reflected until the binary is recompiled.

Thanks a lot! And could you please point me to any resources on how to recompile those C++ files after making changes so that they apply to PyTorch?

You can follow these instructions from the repository.

Hello,

I’m revisiting this discussion because I’m tackling a similar challenge. I’m working on a specialized version of a GRU, incorporating an internal attention mechanism within the cell. The obstacle I face is that the mathematical operations seem to be implemented in C++ rather than Python, complicating direct modifications in the Python layer.

It seems that I would need to clone the raw PyTorch repo from GitHub, make my changes, and then compile it?

Assuming that’s the case, I have a concern: my existing setup depends on PyTorch installed via pip. How would my existing codebase recognize and utilize the locally modified version of PyTorch? What steps are required to ensure that my modifications are acknowledged by my current PyTorch-dependent code?

Thank you for your insights.

You would uninstall the already installed PyTorch binary and then follow the linked build instructions (after manipulating the code). You might want to use incremental builds during development to avoid full rebuilds. The contributing guide gives you more information about it.

Thank you very much for your help.

I found this tutorial as well: https://pytorch.org/tutorials/advanced/cpp_extension.html. Would it work?

Yes, writing a custom extension would also work.

If I decide to build PyTorch from source and modify my GRU gates, would rebuilding the C++ files be the only thing to do? Should I expect to change some kernels if I add a gate to my GRU?

I assume you want to manipulate the internal GRUCell implementation from here. If so, you would need to manipulate this C++ file and rebuild PyTorch from source.

Thank you for your answer.

I have now installed the PyTorch source code and ran pip install --verbose ./pytorch in my venv.

Now, if I understood well, I would need to go in here and make my changes to the gate. After this I would need to rerun pip install --verbose ./pytorch in my venv, right? If everything went well, when calling nn.GRU, it should use the modified GRU cell within the C++ file?

While changing the GRU cell, is there a separate CPU RNN.cpp and a CUDA GPU implementation? Inside the GRU cell, should I worry about changing kernels, tensors, etc.? I just need to implement the following, where the red lines are the added bit.
[screenshot: RAU cell diagram, with the added ReLU and Softmax gates marked in red]

Follow this link [Build from Source - PyTorch] to build from source. Generally, when you want to make core edits, the cleanest way is to have the repo locally using git clone, make the necessary changes, and then follow the same instructions to build from source again. That applies in your case, since you are changing C++ files (changing Python files alone wouldn't require building from source).

Hello, I am at the point of changing the C++ code located here for the diagram implementation above. Since there are two more gates (ReLU and Softmax), I am unsure how to implement them in C++.

I would really appreciate it if somebody could help me implement it, as I am unsure how to manipulate the C++ code. I come from a Python background and I am a bit lost.

Cheers @sprakashdash @ptrblck

Well, if you are wary of C++ sources, you can try modifying the Python script located here.
Do note that I have linked a specific branch's version, so check the path and try to locate the same block in your local PyTorch version. By the way, PyTorch works for both CPU and GPU when you make changes to this code (in most cases).
Best
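For anyone who wants to prototype at the Python level first, the standard GRU update that script implements can be sketched without torch at all. This is a toy transcription of the textbook GRU equations operating on lists of floats; the function and variable names are mine, not PyTorch's:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(i_r, i_z, i_n, h_r, h_z, h_n, hidden):
    """One standard GRU step on per-gate pre-activations.

    i_* are the input-to-hidden linear outputs (W_i* x + b_i*) and h_* the
    hidden-to-hidden ones (W_h* h + b_h*), each a list of hidden_size floats.
    """
    r = [sigmoid(a + b) for a, b in zip(i_r, h_r)]               # reset gate
    z = [sigmoid(a + b) for a, b in zip(i_z, h_z)]               # update gate
    n = [math.tanh(a + g * b) for a, g, b in zip(i_n, r, h_n)]   # candidate state
    # h' = (1 - z) * n + z * h
    return [(1.0 - zz) * nn + zz * hh for zz, nn, hh in zip(z, n, hidden)]
```

With all pre-activations at zero, r = z = 0.5 and n = 0, so the new state is simply half the old one; once that behaviour matches your expectations, adding gates here is a cheap way to sanity-check the math before touching C++.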

Hi,

Sorry for the delay.

I am still convinced that I can do it by modifying the raw C++ file. I will walk you through what I have done so you can fully understand it and hopefully point me in the right direction.

So I reiterate my problem: I would like to change the internal structure of a GRU cell to implement what we call a recurrent attention unit (RAU) (here), as can be seen in the above image. It is just a matter of adding those two gates (ReLU and Softmax).

  1. I downloaded the PyTorch source by executing the following commands:

    git clone --recursive https://github.com/pytorch/pytorch
    cd pytorch
    git submodule sync
    git submodule update --init --recursive

    After this I executed: export _GLIBCXX_USE_CXX11_ABI=1.

  2. When the checkout was finished, I went to pytorch/aten/src/ATen/native/RNN.cpp and changed the following structure to implement my RAU:

template <typename cell_params>
struct GRUCell : Cell<Tensor, cell_params> {
  using hidden_type = Tensor;
  hidden_type operator()(
      const Tensor& input,
      const hidden_type& hidden,
      const cell_params& params,
      bool pre_compute_input = false) const override {
    if (input.is_cuda() || input.is_xpu() || input.is_privateuseone()) {
      // NOTE: this fused branch still assumes the stock 3-gate GRU layout.
      TORCH_CHECK(!pre_compute_input);
      auto igates = params.matmul_ih(input);
      auto hgates = params.matmul_hh(hidden);
      auto result = at::_thnn_fused_gru_cell(
          igates, hgates, hidden, params.b_ih(), params.b_hh());
      // Slice off the workspace argument (it's needed only for AD).
      return std::move(std::get<0>(result));
    }
    const auto chunked_igates = pre_compute_input
        ? input.unsafe_chunk(5, 1)
        : params.linear_ih(input).unsafe_chunk(5, 1);
    auto chunked_hgates = params.linear_hh(hidden).unsafe_chunk(5, 1);
    const auto reset_gate =
        chunked_hgates[0].add_(chunked_igates[0]).sigmoid_();
    const auto input_gate =
        chunked_hgates[1].add_(chunked_igates[1]).sigmoid_();
    const auto new_gate =
        chunked_igates[2].add(chunked_hgates[2].mul_(reset_gate)).tanh_();
    const auto attention_gate_ReLU =
        chunked_hgates[3].add(chunked_igates[3]).relu_();
    const auto attention_gate_softmax =
        at::softmax(chunked_hgates[4].add(chunked_igates[4]), /*dim=*/-1);
    auto gru_normal = (hidden - new_gate).mul_(input_gate).add_(new_gate);
    auto rau = gru_normal + attention_gate_ReLU.mul(attention_gate_softmax);
#warning "C Preprocessor got here!"
    return rau; 
  }
};
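The chunked (non-fused) path above can be cross-checked against a plain-Python transcription of the same math. This is a toy sketch operating on lists of floats, with my own variable names; it mirrors the five-way chunking and the gate order in the C++ code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # numerically stable softmax over a flat list (the dim=-1 case)
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def rau_cell(igates, hgates, hidden):
    """One RAU step. igates/hgates are lists of 5 chunks (each hidden_size
    floats), playing the role of unsafe_chunk(5, 1) on the linear outputs."""
    i0, i1, i2, i3, i4 = igates
    h0, h1, h2, h3, h4 = hgates
    reset  = [sigmoid(a + b) for a, b in zip(h0, i0)]
    update = [sigmoid(a + b) for a, b in zip(h1, i1)]
    new    = [math.tanh(a + r * b) for a, r, b in zip(i2, reset, h2)]
    att_r  = [max(0.0, a + b) for a, b in zip(h3, i3)]            # ReLU gate
    att_s  = softmax([a + b for a, b in zip(h4, i4)])             # softmax gate
    # gru_normal = (hidden - new) * update + new, then add the attention term
    gru = [(h - n) * u + n for h, n, u in zip(hidden, new, update)]
    return [g + r * s for g, r, s in zip(gru, att_r, att_s)]
```

Feeding the same inputs through this sketch and through your rebuilt PyTorch is one way to confirm the C++ edit computes what you intended.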

I have also added the following dependencies for softmax:

#include <ATen/ops/_log_softmax.h>
#include <ATen/ops/_log_softmax_backward_data_native.h>
#include <ATen/ops/_log_softmax_native.h>
#include <ATen/ops/_masked_softmax_backward_native.h>
#include <ATen/ops/_masked_softmax_native.h>
#include <ATen/ops/_softmax.h>
#include <ATen/ops/_softmax_backward_data_native.h>
#include <ATen/ops/_softmax_native.h>
#include <ATen/ops/empty.h>
#include <ATen/ops/empty_like.h>
#include <ATen/ops/log_softmax.h>
#include <ATen/ops/log_softmax_native.h>
#include <ATen/ops/softmax.h>
#include <ATen/ops/softmax_native.h>
#include <ATen/ops/special_log_softmax_native.h>
#include <ATen/ops/special_softmax_native.h>
  3. I created a virtual environment in which I build the PyTorch module:
    python3 -m venv ./test2
    (test2) $ pip install --verbose pytorch/.

When I do this, everything compiles well, and the warning message within the GRUCell struct fires, meaning that it is compiling my modified RNN.cpp and not some other copy.

Within my test2 venv, two entries have been created: torch and torch-2.4.0a0+git8aa08b8.dist-info.

Now that everything is fine, I checked that the torch module within my test2 venv was effectively using my local PyTorch source with my changes:

(test2) $ python3
>>> import torch
>>> print(torch.__version__)
2.4.0a0+git8aa08b8

This means it is actually picking up the local version of PyTorch that I built.
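As a quick aside, that check can be scripted. The sketch below assumes that a source build always carries a +git<sha> local-version tag, as in the output above, while release wheels report a plain version string:

```python
import re

def is_source_build(version: str) -> bool:
    """Heuristic: treat a '+git<sha>' local-version tag as evidence of a
    source build (assumption based on the version string shown above)."""
    return re.search(r"\+git[0-9a-f]+", version) is not None
```

For example, is_source_build("2.4.0a0+git8aa08b8") is True, while is_source_build("2.3.0") is False.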

Now let's move to the Python side.

The goal is that when I call nn.GRU, it takes my new GRU cell from my local PyTorch version into account. Because the gates have changed from 3 to 5, I would need, within rnn.py, to modify this line:

elif mode == 'GRU':
            gate_size = 3 * hidden_size

to this line:

elif mode == 'GRU':
            gate_size = 5 * hidden_size

However, it throws me an error.

My question is that I don't know if I have missed any steps, and I need help understanding what I am doing wrong.
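Without the exact error message it is hard to say, but one likely culprit is a shape mismatch: the Python-side gate_size (now 5 * hidden_size) has to line up with every unsafe_chunk(5, 1) call in RNN.cpp, and other code paths (for instance the fused CUDA kernel) may still assume three gates. A toy sketch of the divisibility constraint the chunking imposes:

```python
def unsafe_chunk(vec, n):
    """Toy stand-in for Tensor.unsafe_chunk(n, dim): split a flat list into
    n equal parts (assumes len(vec) is divisible by n, as in the GRU case)."""
    k = len(vec) // n
    return [vec[i * k:(i + 1) * k] for i in range(n)]

hidden_size = 4
gate_size = 5 * hidden_size          # must match unsafe_chunk(5, 1) in RNN.cpp
parts = unsafe_chunk(list(range(gate_size)), 5)
assert len(parts) == 5
assert all(len(p) == hidden_size for p in parts)
```

If gate_size were left at 3 * hidden_size while the C++ side chunks into 5, the per-gate slices would come out the wrong length, which is the kind of inconsistency worth ruling out first.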

Cheers

What kind of error are you seeing? Your workflow sounds valid so far.