Hi,

I’m currently building a text encoder whose inputs are the hidden activations of a CNN.

As far as I know, torch.nn.RNN computes its values one time step per call. Is this right? (1)

So if I want a multi-step RNN, which would be the usual case, do I always have to forward the input together with the hidden state from the previous step, several times in a for loop? (2)
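To make question (2) concrete, here is a small runnable sketch (toy sizes, random weights, CPU only; not my real model) of what a single nn.RNN call with batch_first=True seems to do with a whole Batch x TimeStep x FeatDim tensor:

```python
import torch
import torch.nn as nn

# Hypothetical sizes, just for illustration
batch, steps, feat, hid = 4, 8, 32, 256

rnn = nn.RNN(input_size=feat, hidden_size=hid, bias=False,
             nonlinearity='relu', batch_first=True)
x = torch.randn(batch, steps, feat)

# One call consumes the whole sequence at once
out, h_n = rnn(x)
print(out.shape)  # torch.Size([4, 8, 256]) -- one output per time step
print(h_n.shape)  # torch.Size([1, 4, 256]) -- final hidden state
```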

Then, in my case (3), where

Input size: Batch x TimeStep x FeatDim

I add a layer like

```python
def __init__(self):
    # After the CNN blocks are added
    self.myRNN = nn.RNN(input_size=FeatDim, hidden_size=256, bias=False,
                        nonlinearity='relu', batch_first=True)
```

```python
def forward(self, x):
    # After forwarding inputs through the CNN blocks
    hidden = torch.zeros(1, batch_size, 256).cuda()
    for i in range(1, 8):
        out, hidden = self.myRNN(x[:, i, :].unsqueeze(1), hidden)
    out = out[:, -1, :]
```

and there’s no error. Is it okay to implement it like this?
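For reference, here is a self-contained toy version of the same idea (made-up sizes, random weights, CPU instead of .cuda()) where I compared the step-by-step loop against a single full-sequence call:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
batch, steps, feat, hid = 4, 8, 32, 16

rnn = nn.RNN(input_size=feat, hidden_size=hid, bias=False,
             nonlinearity='relu', batch_first=True)
x = torch.randn(batch, steps, feat)

# Single call over the full sequence (hidden defaults to zeros)
full_out, full_h = rnn(x)

# Step-by-step loop, feeding the previous hidden state back in
hidden = torch.zeros(1, batch, hid)
for i in range(steps):  # note: starts at 0, unlike range(1, 8) in my code above
    out, hidden = rnn(x[:, i, :].unsqueeze(1), hidden)

# The loop's last output matches the last step of the single call
print(torch.allclose(out[:, -1, :], full_out[:, -1, :], atol=1e-5))  # True
```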