I think named tensors are a very promising concept. However, I wonder about their status in PyTorch. The feature has been around for quite a while (a few years; I'm not sure since when exactly), but the docs still say:
WARNING: The named tensor API is a prototype feature and subject to change.
Why is this still the case? Why is it not stable by now? Has the API actually ever changed? What are the plans?
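For context, this is the kind of usage I have in mind — a minimal sketch of the documented API, with illustrative dimension names ('batch', 'channel') chosen by me:

```python
import torch

# Factory functions accept a names= argument (prototype API).
t = torch.zeros(2, 3, names=('batch', 'channel'))
print(t.names)  # ('batch', 'channel')

# Some ops are name-aware, e.g. align_to reorders dims by name
# instead of by positional index.
t2 = t.align_to('channel', 'batch')
print(t2.names, tuple(t2.shape))  # ('channel', 'batch') (3, 2)
```

This part works fine; the problems start with ops that are not name-aware yet.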
Also, I just tried to use it, and for a very primitive op like tensor.permute, I got:

RuntimeError: aten::permute is not yet supported with named tensors. Please drop names via tensor = tensor.rename(None), call the op with an unnamed tensor, and set names on the result of the operation.
(Feature request here.)
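The workaround the error message suggests looks roughly like this — a sketch of the drop-names/reattach-names dance, with names of my own choosing:

```python
import torch

x = torch.zeros(2, 3, names=('N', 'C'))

names = x.names            # remember ('N', 'C')
y = x.rename(None)         # drop names -> plain unnamed tensor
y = y.permute(1, 0)        # call the op that rejects named tensors
y = y.rename(*reversed(names))  # reattach names to match the new dim order
print(y.names)  # ('C', 'N')
```

Having to wrap every such op in this boilerplate is exactly what makes me hesitate.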
So I wonder, how complete is this support really? What other low-level ops are not supported? When is support for them planned? I know the list here, but I'm not sure what fraction of all ops that covers, or what exactly is missing.
When searching here for "named tensors", I don't see much activity. Here is one thread where someone also stumbled upon many unsupported primitives, like stack or pin_memory.
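For stack, the same drop-names pattern seems to be the only option — a sketch, assuming refine_names is used to set names on the unnamed result:

```python
import torch

a = torch.zeros(3, names=('C',))
b = torch.ones(3, names=('C',))

# torch.stack rejects named inputs, so strip names first,
# then refine the unnamed result back to named dims.
stacked = torch.stack([a.rename(None), b.rename(None)]).refine_names('N', 'C')
print(stacked.names)  # ('N', 'C')
```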
Also, when reading through public PyTorch code, I rarely see named tensors in use. Maybe because support is so incomplete?
And the type stubs for functions like torch.tensor do not cover the names argument, so editors show warnings about unexpected arguments.
I would like to use it, but now I'm reconsidering, given that I would have to add ugly workaround code for every second op, function, or module that does not properly support it.