It means that your tensor is not stored as a single contiguous block of memory, but in a strided layout with gaps. view can only be used on contiguous tensors, so if you need it here, just call .contiguous() first.
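A minimal sketch of the fix (the transpose here is just one way to produce a non-contiguous tensor):

```python
import torch

t = torch.arange(10).reshape(5, 2).t()  # transposing makes t non-contiguous
# t.view(10) would raise a RuntimeError here
flat = t.contiguous().view(10)          # contiguous copy first, then view
print(flat.shape)                       # torch.Size([10])
```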

You can find more details on the memory layout in the numpy docs; Torch uses the same representation.

Thanks, this helped.
Can you tell me what causes this? In what cases are variables not contiguous, and is there a way to ensure a priori that a variable occupies contiguous memory?

Use the tensor.contiguous() method. If the tensor is non-contiguous, it returns a contiguous copy; if it’s already contiguous, it returns the original tensor.
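A quick check of that behavior (a sketch; the tensor names are arbitrary):

```python
import torch

a = torch.randn(3, 4)        # freshly allocated -> contiguous
b = a.t()                    # transposed view -> non-contiguous

print(a.contiguous() is a)   # True: already contiguous, original returned
print(b.is_contiguous())     # False
c = b.contiguous()           # non-contiguous input -> contiguous copy
print(c.is_contiguous())     # True
```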

A classic way to run into this error is to use transposition. If you do

x = torch.Tensor(5,2)
y = x.t()

Then y still shares the same storage as x. You can check:

x.fill_(0)
0 0
0 0
0 0
0 0
0 0
[torch.FloatTensor of size 5x2]
y
0 0 0 0 0
0 0 0 0 0
[torch.FloatTensor of size 2x5]

Here, the element y[0,4] exists logically, but it is not stored where a contiguous 2x5 tensor would put it: y shares x’s storage, which is laid out in x’s (5x2) row order. So y is not contiguous. If you try:

y.view(-1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: input is not contiguous at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:231

So you first have to make y contiguous with y.contiguous().
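A sketch of the fix, also printing the strides that make y non-contiguous:

```python
import torch

x = torch.zeros(5, 2)
y = x.t()

print(x.stride())          # (2, 1): x's rows sit one after another in storage
print(y.stride())          # (1, 2): walking a row of y jumps through x's storage
print(y.is_contiguous())   # False

z = y.contiguous()         # copies the data into a new, contiguous block
print(z.view(-1).shape)    # torch.Size([10]) -- view now works
```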

Is calling .view on tensors also a reason why this error appears?

I am getting this error from an RNN. I am doing a bunch of reshapings with view:

xn_lstm = torch.cat((loss_prep, err_prep, grad_prep), 1).unsqueeze(0)  # [n_learner_params, 6]
# normal lstm([loss, grad_prep, train_err]) = lstm(xn)
n_learner_params = xn_lstm.size(1)
(lstmh, lstmc) = hs[0]  # previous hx from first (standard) lstm i.e. lstm_hx = (lstmh, lstmc) = hs[0]
if lstmh.size(1) != xn_lstm.size(1):  # only true when prev lstm_hx is equal to decoder/controllers hx
    # make sure that h, c from decoder/controller has the right size to go into the meta-optimizer
    expand_size = torch.Size([1, n_learner_params, self.lstm.hidden_size])
    lstmh, lstmc = lstmh.squeeze(0).expand(expand_size), lstmc.squeeze(0).expand(expand_size)
lstm_out, (lstmh, lstmc) = self.lstm(input=xn_lstm, hx=(lstmh, lstmc))