In particular, `patches_list` is a very big tensor and my GPU has only 4 GB of memory. I guessed the possible reason was that the method couldn't find a contiguous block of GPU memory to store it. But in fact, I never received any error, which confuses me.
Hi, `.contiguous()` is not an in-place operation (it does not have a trailing `_` in its name), which means you should do `patches_list = patches_list.contiguous()`.
`.contiguous()` will raise an error if it fails!
In your example you permute `a` and assign the result to `b`, so `a` is unchanged and remains contiguous, while `b` receives the permuted tensor and is therefore not contiguous.
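A minimal sketch illustrating both points (tensor names are illustrative, not taken from your code):

```python
import torch

# A freshly created tensor is contiguous
a = torch.randn(2, 3, 4)
print(a.is_contiguous())  # True

# permute returns a non-contiguous view; a itself is untouched
b = a.permute(2, 0, 1)
print(a.is_contiguous())  # still True
print(b.is_contiguous())  # False

# .contiguous() is NOT in-place: it returns a new tensor,
# so the result must be assigned back
c = b.contiguous()
print(b.is_contiguous())  # still False
print(c.is_contiguous())  # True
```

Calling `b.contiguous()` without assigning the result would allocate the contiguous copy and immediately discard it, which is why your original tensor never changed.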