Tensor and PackedTensorAccessor for int tensors

Hi,

I am very new to the C++/CUDA PyTorch API and I would like to manipulate tensors containing integers (int64). I was wondering how to do so. Also, PackedTensorAccessor doesn't seem to accept int as its first template argument.

So, just two examples. First, instantiating a zero-valued tensor of type int:

torch::Tensor test = torch::zeros({5}, int);

or trying to get an accessor from an existing tensor:

torch::Tensor test = torch::zeros({5});
torch::PackedTensorAccessor<int, 1, torch::RestrictPtrTraits, size_t> acc = test.packed_accessor<int, 1, torch::RestrictPtrTraits, size_t>();

I also tried:

torch::Tensor test = torch::zeros({5});
torch::PackedTensorAccessor<torch::kInt, 1, torch::RestrictPtrTraits, size_t> acc = test.packed_accessor<torch::kInt, 1, torch::RestrictPtrTraits, size_t>();

I get errors for these lines. I guess I must be missing something very obvious; can you help me?

Thank you in advance,

Samuel


You need to match the accessor's template type to the tensor's scalar_type: if you instantiated a torch::kLong tensor (that's 64-bit integers), use int64_t; for kInt (32-bit), use int32_t; and if you didn't specify a dtype (the default is kFloat), use float.

Note that you usually do not need packed_accessor for CPU tensors; it is intended for passing to CUDA kernels. accessor will do for the CPU (it points at the Tensor's stride and size arrays), unless you want 32-bit indexing (maybe on ARM, probably not on x86).
(And you can use auto for less repetitive code.)
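
For example, something along these lines should work (just a sketch using the C++ frontend dtype helpers; variable names are only for illustration, and a plain accessor is shown for the CPU case):

#include <torch/torch.h>

// A kLong tensor holds int64_t elements, so the accessor's element type must be int64_t.
torch::Tensor test = torch::zeros({5}, torch::dtype(torch::kLong));
auto acc = test.accessor<int64_t, 1>();   // plain accessor is enough on the CPU
acc[0] = 42;

// For a kInt tensor the element type is int32_t instead.
torch::Tensor test32 = torch::zeros({5}, torch::dtype(torch::kInt));
auto acc32 = test32.accessor<int32_t, 1>();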

Best regards

Thomas


Thanks a lot for your answer Thomas, it helped a lot.

I am actually implementing a CUDA extension, which is why I am using packed_accessor.
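
For anyone else landing here, passing the accessor into a kernel then looks roughly like this (just a sketch: the kernel and wrapper names are made up, and it assumes a 1-D kLong CUDA tensor):

#include <torch/extension.h>

// Fills a 1-D int64 tensor with its own indices (illustrative only).
__global__ void fill_iota_kernel(
    torch::PackedTensorAccessor32<int64_t, 1, torch::RestrictPtrTraits> out) {
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < out.size(0)) {
    out[i] = i;
  }
}

void fill_iota(torch::Tensor out) {
  const int64_t n = out.size(0);
  const int threads = 256;
  const int blocks = static_cast<int>((n + threads - 1) / threads);
  fill_iota_kernel<<<blocks, threads>>>(
      out.packed_accessor32<int64_t, 1, torch::RestrictPtrTraits>());
}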

Best regards,

Samuel

Hi Tom, how can I use bfloat16 in ATen with a packed_accessor? For floats, the tensor creation and the packed_accessor look like this (e.g.):

auto X = torch::zeros({a, b}, torch::dtype(torch::kF32))
X.packed_accessor32<float,2,torch::RestrictPtrTraits>()

but what should this read for bfloat16? I.e., what should go in place of the question marks below:

auto X = torch::zeros({a, b}, torch::dtype(torch::[?]))
X.packed_accessor32<[?],2,torch::RestrictPtrTraits>()

I can't find the ATen reference for bfloat16, nor do I know what the packed_accessor type should be…

Sometimes you search for something for hours only to find the answer shortly after asking the question. The following is the answer:

auto X = torch::zeros({a, b}, torch::dtype(at::ScalarType::BFloat16))
X.packed_accessor32<at::BFloat16,2,torch::RestrictPtrTraits>()
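
As a small follow-up, for reading and writing on the CPU side: at::BFloat16 converts to and from float, so values can be handled in float once you go through a matching accessor (again just a sketch):

#include <torch/torch.h>

// Matching element type for a BFloat16 tensor; arithmetic is typically done in float.
auto X = torch::zeros({4, 3}, torch::dtype(at::ScalarType::BFloat16));
auto acc = X.accessor<at::BFloat16, 2>();
acc[0][0] = static_cast<at::BFloat16>(1.5f);
float v = static_cast<float>(acc[0][0]);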