Difference between torch::Tensor and at::Tensor

I don't quite get the difference between these two tensor entities. I vaguely remember reading somewhere that at::Tensor should be avoided for some reason I cannot recall, but what is the general advice here?
thanks in advance

at::Tensor is not differentiable, while torch::Tensor is. It is similar to the difference between Variables and plain tensors in Python before 0.4.0.
As far as I know, torch::Tensors don't carry any overhead even if you don't need to differentiate them, so that might be the reason to prefer the torch namespace when creating tensors.


Hi,

I feel like even in "official" code these two are used interchangeably (inconsistently?). Take, for example, the canonical tutorial https://pytorch.org/tutorials/advanced/cpp_extension.html#writing-the-c-op

The declaration of the forward pass returns std::vector<at::Tensor> (and where does the conversion happen inside?):

std::vector<at::Tensor> lltm_forward(
    torch::Tensor input,
    torch::Tensor weights,
    torch::Tensor bias,
    torch::Tensor old_h,
    torch::Tensor old_cell);

and then later the declaration of the backward pass returns std::vector<torch::Tensor>:

std::vector<torch::Tensor> lltm_backward(
    torch::Tensor grad_h,
    torch::Tensor grad_cell,
    torch::Tensor new_cell,
    torch::Tensor input_gate,
    torch::Tensor output_gate,
    torch::Tensor candidate_cell,
    torch::Tensor X,
    torch::Tensor gate_weights,
    torch::Tensor weights);

Regards,

Another example is the brand-new CPU ops in torchvision:

they are written purely using at::Tensor (actually I even see TH/TH.h headers there, but no torch::Tensor)