# Shape section in documentation

I noticed in the CosineEmbeddingLoss — PyTorch 2.0 documentation, and in multiple other functions, that 1-D tensor shapes are written as (N). Pythonically, though, a 1-D shape would be written (N,), while (N) would denote a scalar (0-D).

This convention could be mentioned on the torch.tensor page.
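To illustrate why the trailing comma matters, here is a plain-Python sketch (not PyTorch-specific) of how `(N)` and `(N,)` differ as literals:

```python
# In Python, parentheses alone do not create a tuple;
# the trailing comma does.
print(type((3)))   # (3) is just the int 3
print(type((3,)))  # (3,) is a one-element tuple
```

This is why writing shapes as (N) in the docs can be read as a scalar rather than a 1-D shape.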

Wouldn’t scalars be `()` as also noted in the docs?

```python
import torch
from torch import tensor

t = tensor(3).detach()    # scalar (0-D)
print(t)
print(t.shape)
t = tensor((3)).detach()  # still a scalar: (3) is just the int 3
print(t)
print(t.shape)
t = tensor((3,)).detach() # 1-D, one element
print(t)
print(t.shape)
t = torch.tensor(()).detach()  # 1-D, zero elements
print(t)
print(t.shape)
```

I ran the code above as an example. `()` produces an empty 1-D tensor, not a scalar.

I see the empty shape for the scalar use case:

```python
t = tensor(3)  # scalar
print(t)
# tensor(3)
print(t.shape)
# torch.Size([])
```
```python
t = torch.tensor(()).detach()  # 1-D, zero elements
print(t)
# tensor([])
print(t.shape)
# torch.Size([0])
```

Yes, but `tensor(())` has shape `torch.Size([0])` — an empty 1-D tensor — while scalars like `tensor(N)` have the empty shape `torch.Size([])`, i.e. 0-D. This distinction isn't clear in the docs.
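One way to make the 0-D vs empty-1-D distinction concrete is to compare `dim()` and `shape` directly (a minimal sketch):

```python
import torch

scalar = torch.tensor(3)   # 0-D: empty shape, but holds one value
empty = torch.tensor(())   # 1-D: one dimension of length zero
print(scalar.dim(), scalar.shape)  # 0 torch.Size([])
print(empty.dim(), empty.shape)    # 1 torch.Size([0])
```

So the two "empty-looking" shapes differ in dimensionality, not just in element count.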

```python
t = tensor((3)).detach()  # scalar: (3) is just the int 3
print(t)
# tensor(3)
print(t.shape)
# torch.Size([])
```

See above: `tensor((N))` with N = 3 produces a scalar, not a 1-D tensor, while the linked docs use (N) to denote a 1-D shape.