Where is GRU implemented?

I want to know how GRU is implemented in PyTorch, but I can’t find the implementation of GRU although I looked here. Can anyone tell me where GRU is implemented in PyTorch?

Hi,

Are you looking for the source code? If so, you can find it here.


The “native” implementations are in
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/RNN.cpp and
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/RNN.cu
There are also cuDNN bindings…
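To see the dispatch in action, here is a minimal sketch (the sizes are hypothetical, just for illustration) of calling nn.GRU, which routes to those native kernels on CPU and to the cuDNN bindings on a GPU with cuDNN available:

import torch
import torch.nn as nn

# Hypothetical sizes, just for illustration.
gru = nn.GRU(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)    # (seq_len, batch, input_size)
h0 = torch.zeros(2, 3, 20)   # (num_layers, batch, hidden_size)
out, hn = gru(x, h0)         # on CPU this ends up in aten/src/ATen/native/RNN.cpp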


Thanks @tom for your reply.
I am a novice in C++, and although I looked up how narrow works, I still don’t clearly understand what hidden_slice returns.

// Slice rows [start, end) of the hidden state along dimension 0.
// narrow(dim, start, length) returns a view, so no data is copied.
Tensor hidden_slice(const Tensor& t, int64_t start, int64_t end) {
  return t.narrow(0, start, end - start);
}
// Overload for (hidden, cell) pairs (as used by LSTM): slice both members.
tpair_of<Tensor> hidden_slice(const tpair_of<Tensor>& t, int64_t start, int64_t end) {
  return std::make_tuple(hidden_slice(std::get<0>(t), start, end),
                         hidden_slice(std::get<1>(t), start, end));
}

Can you please explain what hidden_slice returns?

So it’s overloaded: the first variant returns a view of t along dimension 0 containing rows start through end - 1 (narrow(dim, start, length) takes length elements beginning at start, without copying). Calling it with a pair (second variant) returns a tuple whose elements are obtained by applying the first variant to each of the two members.
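If it helps, here is a rough Python equivalent of the two overloads (a minimal sketch; the function names just mirror the C++ ones):

import torch

def hidden_slice(t, start, end):
    # View of rows [start, end) along dim 0; narrow(dim, start, length) copies nothing.
    return t.narrow(0, start, end - start)

def hidden_slice_pair(t_pair, start, end):
    # Mirror of the tuple overload: slice both members, e.g. LSTM's (h, c) pair.
    return (hidden_slice(t_pair[0], start, end),
            hidden_slice(t_pair[1], start, end))

h = torch.arange(12.).reshape(4, 3)
print(hidden_slice(h, 1, 3))        # rows 1 and 2, shape (2, 3)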

Best regards

Thomas

Thank you @tom :smiley::smiley: