RuntimeError: input is not contiguous

When I reshape a tensor (when the rank changes), I get the following error:

x = ...  # Tensor of shape (100, 20)
x.view(-1)   # expect a tensor of shape (2000)

RuntimeError: input is not contiguous

What does ‘contiguous’ mean and why does this error occur?


It means that your tensor is not a single block of memory, but a block with holes. view can only be used with contiguous tensors, so if you need it here, just call .contiguous() before.
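For example (a minimal sketch; the slicing here is just one way to produce a non-contiguous tensor, and the exact error text depends on the PyTorch version):

import torch

x = torch.randn(100, 20)      # one contiguous block of 2000 floats
y = x[:, ::2]                 # every other column: same storage, but with "holes"
print(y.is_contiguous())      # False
# y.view(-1)                  # raises a RuntimeError because y is not contiguous
z = y.contiguous().view(-1)   # copy the data into a fresh contiguous block, then view
print(z.shape)                # torch.Size([1000])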

You can find some more details on the memory layout in the NumPy docs. PyTorch uses the same representation.


Can you tell when we get this error? I am facing this problem, but not always. I am just wondering why it only happens occasionally.


Thanks, this helped.
Can you tell what causes this? In what cases are variables not contiguous, and is there a way to ensure a priori that a variable occupies contiguous memory?


Use the tensor.contiguous() method. If the tensor is non-contiguous, it returns a contiguous copy. If it is already contiguous, it returns the original tensor.
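For instance (a small illustrative sketch):

import torch

x = torch.randn(5, 2)
y = x.t()                                 # non-contiguous view of x
print(y.is_contiguous())                  # False
print(y.contiguous().is_contiguous())     # True: a contiguous copy was made
print(x.contiguous() is x)                # True: already contiguous, so x itself is returned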


A classic way to run into this error is transposition. If you do

x = torch.Tensor(5,2)
y = x.t()

Then the storage of y is still the same as that of x. You can check:

x.fill_(0)
 0  0
 0  0
 0  0
 0  0
 0  0
[torch.FloatTensor of size 5x2]

y
 0  0  0  0  0
 0  0  0  0  0
[torch.FloatTensor of size 2x5]

Now y[0,4] is a valid element of y, but the storage behind y is still x's storage, laid out in x's (5, 2) row-major order, so y's rows are not consecutive blocks of memory. That is why y is not contiguous. If you try:

y.view(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: input is not contiguous at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:231

So you have to make y contiguous with y.contiguous().
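One way to see this is to look at the strides (a hedged sketch continuing with the same x and y as above; the numbers assume the default row-major layout):

print(x.stride())                # (2, 1): x's rows sit one after another in storage
print(y.stride())                # (1, 2): y reads the same storage with swapped strides
print(y.is_contiguous())         # False: a fresh (2, 5) tensor would have stride (5, 1)

y_flat = y.contiguous().view(-1) # .contiguous() copies into a new row-major block first
# In PyTorch 0.4+, y.reshape(-1) does the same and copies only when it has to.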


Thanks. That makes complete sense now.

Why is it not a good idea to add an automatic tensor.contiguous() call to the transpose function?

I guess because it saves a lot of memory to keep a single shared storage for such tensors, as long as you don't apply any reshaping transformation.
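A quick way to see the saving (illustrative sizes; data_ptr() just reports where a tensor's storage starts):

import torch

x = torch.randn(1000, 1000)
y = x.t()                                          # free: no data is copied
print(x.data_ptr() == y.data_ptr())                # True: both tensors share one storage
print(y.contiguous().data_ptr() == x.data_ptr())   # False: contiguous() allocates a new block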

So, in the following case:

z = y.contiguous().view(-1)

will the gradients of some error with respect to z backpropagate to x?


@zakaria_laskar: yes, gradients backpropagate correctly through the view(-1) and through the contiguous() call.
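A quick check (a minimal sketch, assuming a PyTorch version where requires_grad can be set on the tensor directly):

import torch

x = torch.randn(5, 2, requires_grad=True)
z = x.t().contiguous().view(-1)
z.sum().backward()
print(x.grad)                  # a (5, 2) tensor of ones: gradients flowed back through both ops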


Thanks for your clear and useful reply!

Is calling .view on tensors also a reason why this error appears?

I am getting this error from an RNN. I am doing a bunch of reshaping with view:

xn_lstm = torch.cat((loss_prep, err_prep, grad_prep), 1).unsqueeze(0) # [n_learner_params, 6]
# normal lstm([loss, grad_prep, train_err]) = lstm(xn)
n_learner_params = xn_lstm.size(1)
(lstmh, lstmc) = hs[0] # previous hx from first (standard) lstm i.e. lstm_hx = (lstmh, lstmc) = hs[0]
if lstmh.size(1) != xn_lstm.size(1): # only true when prev lstm_hx is equal to decoder/controllers hx
    # make sure that h, c from decoder/controller has the right size to go into the meta-optimizer
    expand_size = torch.Size([1, n_learner_params, self.lstm.hidden_size])
    lstmh, lstmc = lstmh.squeeze(0).expand(expand_size), lstmc.squeeze(0).expand(expand_size)
lstm_out, (lstmh, lstmc) = self.lstm(input=xn_lstm, hx=(lstmh, lstmc))

error:

RuntimeError: rnn: hx is not contiguous
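Following the earlier advice in this thread, a likely fix is to make the expanded hidden state contiguous before passing it to the LSTM, since expand() returns a non-contiguous view and the RNN requires contiguous hx. A hedged sketch using the variables from the snippet above:

lstmh, lstmc = lstmh.contiguous(), lstmc.contiguous()            # expand() returns non-contiguous views
lstm_out, (lstmh, lstmc) = self.lstm(input=xn_lstm, hx=(lstmh, lstmc))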