Iterate over a Tensor

for j in range(sequence['input'].size(2) - 1):
    inputs = sequence['input'][:, :, j:j+2, :, :].cuda(args.gpu, non_blocking=True)
    t = sequence['target'][:, :, j+1, :, :].cuda(args.gpu, non_blocking=True)

I am trying to iterate over a Tensor but I get the following error:

RuntimeError: invalid argument 3: Source tensor must be contiguous at ../src/THC/generic/THCTensorCopy.c:114

Do you know what the problem is?

I guess the problem is that indexing makes the tensors non-contiguous (meaning their elements are no longer stored in adjacent memory cells). You could give it a try with:

for j in range(sequence['input'].size(2) - 1):
    inputs = sequence['input'][:, :, j:j+2, :, :].contiguous().cuda(args.gpu, non_blocking=True)
    t = sequence['target'][:, :, j+1, :, :].contiguous().cuda(args.gpu, non_blocking=True)

This snippet may reallocate some memory. Usually, contiguous tensors become non-contiguous through view/reshape operations (and, as in your case, slicing/indexing).
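To see why slicing a middle dimension breaks contiguity, here is a small sketch. It uses NumPy rather than PyTorch only so it runs without CUDA; NumPy shares the same strided-memory model, and `np.ascontiguousarray` plays the role of `tensor.contiguous()`. The shape is made up for illustration.

```python
import numpy as np

# Hypothetical shape standing in for sequence['input'].
x = np.zeros((4, 3, 8, 5, 5))

# Slicing a middle dimension keeps the original strides, so the
# selected elements are no longer adjacent in memory.
view = x[:, :, 2:4, :, :]
print(view.flags['C_CONTIGUOUS'])   # False

# Analogous to tensor.contiguous(): makes a compact copy.
copy = np.ascontiguousarray(view)
print(copy.flags['C_CONTIGUOUS'])   # True
```

The same check exists in PyTorch as `tensor.is_contiguous()`, which you can use to confirm which of your slices actually need the copy.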

Thanks, using contiguous() helped solve the error. I should check what it is actually doing, because I have the feeling it will slow down training. If the tensor's memory is reallocated on every indexing step just to make it contiguous, something is wrong.