Question about implementation difference

I am currently working with the C++ API provided by PyTorch. I have some confusion about `at::Tensor` and `torch::Tensor`: according to the docs in PyTorch C++ API — PyTorch master documentation, they should be different. However, my experiment below suggests otherwise (apologies if the formatting is off — this might be my first time posting a question):

```cpp
#include <iostream>
#include <torch/torch.h>
#include <type_traits>

int main() {
  at::Tensor a = torch::rand({2, 2}, torch::requires_grad());
  std::cout << a << std::endl;
  std::cout << "is same type: "
            << std::is_same<torch::Tensor, at::Tensor>::value << "\n";
  at::Tensor b = torch::rand({2, 2}, torch::requires_grad());
  auto c = a + b;
  c.sum().backward();  // run the backward pass so a.grad() is populated
  std::cout << a.grad();
  return 0;
}
```
The output is:

```
 0.9828  0.3172
 0.7801  0.3766
[ CPUFloatType{2,2} ]
is same type: 1
 1  1
 1  1
[ CPUFloatType{2,2} ]
```

So it seems `at::Tensor` is the same type as `torch::Tensor`. I am wondering how the PyTorch team achieves this. Is it done by namespace aliasing, since the two names live in different namespaces?
Hope someone can answer my question.