Is there a transpose operation in RNN parameters?

I read in the RNN API docs that the parameters are:

weight_ih_l[k]: the learnable input-hidden weights of the k-th layer,
            of shape `(input_size x hidden_size)`
weight_hh_l[k]: the learnable hidden-hidden weights of the k-th layer,
            of shape `(hidden_size x hidden_size)`
bias_ih_l[k]: the learnable input-hidden bias of the k-th layer,
            of shape `(hidden_size)`
bias_hh_l[k]: the learnable hidden-hidden bias of the k-th layer,
            of shape `(hidden_size)`

But looking at the RNNBase source, the parameters are created like this:

layer_input_size = input_size if layer == 0 else hidden_size * num_directions
gate_size = hidden_size  # for a plain RNN; LSTM/GRU use 4* / 3* hidden_size
w_ih = Parameter(torch.Tensor(gate_size, layer_input_size))  # (hidden_size, input_size) for layer 0
w_hh = Parameter(torch.Tensor(gate_size, hidden_size))       # (hidden_size, hidden_size)
b_ih = Parameter(torch.Tensor(gate_size))
b_hh = Parameter(torch.Tensor(gate_size))
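To make the mismatch concrete, here is a small check of the shapes an nn.RNN instance actually stores (a minimal sketch; input_size=10 and hidden_size=20 are arbitrary values I picked):

import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=1)

# Print the stored shape of every learnable parameter.
for name, param in rnn.named_parameters():
    print(name, tuple(param.shape))
# weight_ih_l0 (20, 10)   i.e. (hidden_size, input_size)
# weight_hh_l0 (20, 20)   i.e. (hidden_size, hidden_size)
# bias_ih_l0 (20,)
# bias_hh_l0 (20,)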

So the source allocates w_ih with shape (hidden_size, input_size), while the docs state (input_size x hidden_size). Is there a transpose operation applied somewhere in the forward pass, or is one of the two simply written in the opposite order? I'm confused.
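My current guess is that the stored matrices are applied transposed, i.e. h' = tanh(x @ W_ih.T + b_ih + h @ W_hh.T + b_hh). Here is a minimal sketch checking that guess against nn.RNN for a single step (seed and sizes are arbitrary):

import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=1)

x = torch.randn(1, 1, 10)   # (seq_len, batch, input_size)
h0 = torch.zeros(1, 1, 20)  # (num_layers, batch, hidden_size)
out, _ = rnn(x, h0)

# Manual step using the stored (hidden_size, input_size) matrix,
# applied with an explicit transpose.
w_ih, w_hh = rnn.weight_ih_l0, rnn.weight_hh_l0
b_ih, b_hh = rnn.bias_ih_l0, rnn.bias_hh_l0
manual = torch.tanh(x[0] @ w_ih.t() + b_ih + h0[0] @ w_hh.t() + b_hh)

print(torch.allclose(out[0], manual, atol=1e-6))  # True if the guess is right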