[Bug?] Two tensors have the same data_ptr

Name     Version   Build                    Channel
pytorch  1.7.1     py3.8_cuda110_cudnn8_0   pytorch

Code:

import torch

device = 'cuda:0'
x = torch.rand([2, 3, 4], device=device).transpose(0, 2)  # transpose makes x a non-contiguous view
y = torch.zeros_like(x)
args_list = [x, y]
ret_list = []
for item in args_list:
    if isinstance(item, torch.Tensor):
        item = item.contiguous()
        ret_list.append(item.data_ptr())
print(ret_list)

Output:

[47284487168, 47284487168]

x and y are two different tensors, but they have the same data_ptr.

You are creating a new tensor in:

item = item.contiguous()

and rebinding item to it in each iteration. The previous copy is freed as soon as item is rebound, so the allocator can (and in your case does) reuse the same memory block for the next copy, which is why both calls report the same data_ptr.
If you print the data_ptr() of x and y, or of item before the contiguous() call, you will see that they are different.
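For example, something along these lines (a small sketch) should show it; keeping both contiguous copies alive prevents the allocator from handing out the same block twice:

import torch

device = 'cuda:0'
x = torch.rand([2, 3, 4], device=device).transpose(0, 2)  # non-contiguous view
y = torch.zeros_like(x)

# The original tensors already live at different addresses.
print(x.data_ptr(), y.data_ptr())

# Keep both contiguous copies alive so their memory cannot be reused.
x_c = x.contiguous()
y_c = y.contiguous()
print(x_c.data_ptr(), y_c.data_ptr())  # also two different pointers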


Thanks! I have another question: how can I make some tensors contiguous inside a function? For example:


import torch

def set_contiguous(args_list: list):
    for i in range(len(args_list)):
        args_list[i] = args_list[i].contiguous()

device = 'cuda:0'
x = torch.rand([2, 3, 4], device=device).transpose(0, 2)
y = torch.zeros_like(x)
args_list = [x, y]
set_contiguous(args_list)
print(x.is_contiguous())

The output is:
False

As you said, args_list[i] = args_list[i].contiguous() creates a new tensor, so x and y themselves are not made contiguous. How can I define a correct set_contiguous function?

The entries in args_list are contiguous, since you are assigning the new tensors to the list:

print(args_list[0].is_contiguous())
> True
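To make the distinction explicit, you could compare the list entry with the original name (a quick sketch):

print(x.is_contiguous())                         # False: x still refers to the transposed view
print(args_list[0].is_contiguous())              # True: the list slot now holds the new contiguous copy
print(x.data_ptr() == args_list[0].data_ptr())   # False: the copy has its own storage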

Thanks! It seems hard to find a real in-place function to make x and y contiguous. For now I use the return values to overwrite the inputs, which behaves like an in-place update:

import torch

def set_contiguous(*args):
    # Return contiguous copies of all input tensors, in the given order.
    ret = []
    for item in args:
        ret.append(item.contiguous())
    return ret

device = 'cuda:0'
x = torch.rand([2, 3, 4], device=device).transpose(0, 2)
y = torch.zeros_like(x)
x, y = set_contiguous(x, y)
print(x.is_contiguous())
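This prints True now, since x and y are rebound to the contiguous copies returned by set_contiguous. If you prefer, the same idea can be written a bit more compactly; just a sketch of the same approach:

import torch

def set_contiguous(*args):
    # Return contiguous copies of every input tensor, in order.
    return tuple(t.contiguous() for t in args)

device = 'cuda:0'
x = torch.rand([2, 3, 4], device=device).transpose(0, 2)
y = torch.zeros_like(x)
x, y = set_contiguous(x, y)
print(x.is_contiguous(), y.is_contiguous())  # True True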