tgsmdww
(Alter)
June 9, 2020, 1:44am
1
Tensor empty_cpu(IntArrayRef size, const TensorOptions& options_, c10::optional<c10::MemoryFormat> optional_memory_format) {
  ......
  auto memory_format = options.memory_format_opt().value_or(MemoryFormat::Contiguous);
  tensor.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
  return tensor;
}
Here tensor.options().has_memory_format() is false. When I copy the tensor to CUDA, which calls to_impl, execution enters this branch:
if (memory_format == MemoryFormat::Preserve) {
  if (self.is_non_overlapping_and_dense()) {
    // Copy all strides
    auto r = at::empty_strided(self.sizes(), self.strides(), options.memory_format(c10::nullopt));
    r.copy_(self, non_blocking);
    return r;
  } else {
    memory_format = self.suggest_memory_format();
  }
}
Must I use tensor.to("cuda", memory_format=torch.contiguous_format)
to set its memory_format to Contiguous?
The memory_format seems to be respected in this code snippet:
x = torch.empty(2, 3, 4, 5, memory_format=torch.contiguous_format)
print(x.is_contiguous(memory_format=torch.contiguous_format))
print(x.is_contiguous(memory_format=torch.channels_last))
y = x.to('cuda')
print(y.is_contiguous(memory_format=torch.contiguous_format))
print(y.is_contiguous(memory_format=torch.channels_last))
x = torch.empty(2, 3, 4, 5, memory_format=torch.channels_last)
print(x.is_contiguous(memory_format=torch.contiguous_format))
print(x.is_contiguous(memory_format=torch.channels_last))
y = x.to('cuda')
print(y.is_contiguous(memory_format=torch.contiguous_format))
print(y.is_contiguous(memory_format=torch.channels_last))
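To see why both copies preserve the layout, it helps to look at the strides directly. Here is a CPU-only sketch (no CUDA needed) showing how the two memory formats differ for the same shape:

```python
import torch

# A contiguous (NCHW) tensor and a channels_last (NHWC) tensor of the same shape.
x = torch.empty(2, 3, 4, 5, memory_format=torch.contiguous_format)
y = torch.empty(2, 3, 4, 5, memory_format=torch.channels_last)

# Contiguous strides: row-major over (N, C, H, W).
print(x.stride())  # (60, 20, 5, 1)
# Channels-last strides: C is the fastest-moving dimension.
print(y.stride())  # (60, 1, 15, 3)

print(x.is_contiguous(memory_format=torch.contiguous_format))  # True
print(y.is_contiguous(memory_format=torch.channels_last))      # True
```

Since both tensors are non-overlapping and dense, the Preserve branch shown earlier copies these strides verbatim via at::empty_strided, which is why the .to('cuda') results keep their layout.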
Could you post a code snippet that breaks it, please?
tgsmdww
(Alter)
June 16, 2020, 1:56am
3
I mean that a tensor doesn’t have a default memory_format unless we explicitly specify its memory_format.
The default format is torch.contiguous_format, so you wouldn’t have to specify it.
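A quick check of that claim (assuming a recent PyTorch build):

```python
import torch

# Tensors created without an explicit memory_format come out contiguous.
x = torch.empty(2, 3, 4, 5)
print(x.is_contiguous(memory_format=torch.contiguous_format))  # True
print(x.is_contiguous(memory_format=torch.channels_last))      # False
```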
tgsmdww
(Alter)
June 16, 2020, 3:51am
5
I don’t understand why tensor.options().has_memory_format() is always false, whether or not I set its memory_format.
I’m not sure I understand the use case correctly.
Based on your first code snippet it seems you are trying to create a contiguous tensor, so you wouldn’t need to pass a specific memory format to the initialization of the tensor.
Let me know if I misunderstood the use case.
tgsmdww
(Alter)
June 17, 2020, 2:21am
7
I only want to know how to make tensor.options().has_memory_format() return true.
This flag would be true if you explicitly create a TensorOptions object and set the memory format, as shown in this example:
import torch
import torch.nn as nn
from torch.utils import cpp_extension
cuda_source = """
std::vector<torch::Tensor> my_fun(void)
{
    auto options = at::TensorOptions(at::MemoryFormat::ChannelsLast);
    auto out = torch::ones({2, 3, 4, 5}, options);
    std::cout << std::boolalpha << options.has_memory_format() << std::endl;
    return {out};
}
"""
cpp_source = """
std::vector<torch::Tensor> my_fun(void);
"""
module = torch.utils.cpp_extension.load_inline(
    name="cuda_test_extension",
    cpp_sources=cpp_source,
    cuda_sources=cuda_source,
    functions="my_fun",
    verbose=True,
)
out = module.my_fun()
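The returned tensor's layout can also be verified from the Python side. The sketch below does the equivalent check with a plain channels_last tensor as a stand-in; the module.my_fun() lines are commented out because they require the compiled extension from above:

```python
import torch

# Equivalent Python-side check: a tensor converted to channels_last
# reports that layout via is_contiguous(memory_format=...).
ref = torch.ones(2, 3, 4, 5).contiguous(memory_format=torch.channels_last)
print(ref.is_contiguous(memory_format=torch.channels_last))  # True

# With the extension built, the tensor returned by my_fun could be
# inspected the same way (assumes the build above succeeded):
# out = module.my_fun()
# print(out[0].is_contiguous(memory_format=torch.channels_last))
```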