Output of RNN is not contiguous

I would expect the output of an RNN to be contiguous in memory, but this doesn’t seem to be the case. For instance, the final output in the snippet below has output.is_contiguous() == False.

import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable

train = True
num_layers = 1
bidirectional = True
bi = 2 if bidirectional else 1
model_dim = 128  # example value; hidden size per direction is model_dim // bi

# _x: example float32 array of shape (batch_size, seq_length, input_dim)
_x = np.random.randn(8, 20, 32).astype(np.float32)

x = Variable(torch.from_numpy(_x), volatile=not train)
batch_size, seq_length, input_dim = x.size()

rnn = nn.LSTM(input_dim, model_dim // bi, num_layers,
    batch_first=True,
    bidirectional=bidirectional,
    )

h0 = Variable(torch.zeros(num_layers * bi, batch_size, model_dim // bi), volatile=not train)
c0 = Variable(torch.zeros(num_layers * bi, batch_size, model_dim // bi), volatile=not train)

print(x.is_contiguous())
# True

# Expects (input, (h_0, c_0)):
#   input => batch_size x seq_length x input_dim   (batch_first=True)
#   h_0   => (num_layers * bi) x batch_size x (model_dim // bi)
#   c_0   => (num_layers * bi) x batch_size x (model_dim // bi)
output, (hn, cn) = rnn(x, (h0, c0))

print(output.is_contiguous())
# False

Yeah, I think that’s expected. Depending on the chosen backend, either a contiguous or a non-contiguous result may be returned. Why is that a problem?
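If a later op needs contiguous memory (e.g. before a .view()), the usual fix is to call .contiguous() on the output yourself. A minimal sketch, assuming output, batch_size, and seq_length from the snippet above are still in scope:

# .contiguous() returns the same tensor if it is already contiguous,
# otherwise it makes a contiguous copy.
output = output.contiguous()

# Reshaping now works regardless of which backend produced the output.
flat = output.view(batch_size * seq_length, -1)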

Okay. I noticed the same behavior on both CPU and GPU. I don’t have a specific problem, but I assumed that if the input is contiguous then the output would be as well. Thanks for the response!

No, I don’t think we’ve ever guaranteed that. I’ll take a look at RNNs anyway, thanks for the notice!
