Bug when calling ".cuda()"

When I use ".cuda()", something goes wrong.

>>> import torch
>>> A = torch.tensor([1,2])
>>> B = A.cuda()
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/THC/THCGeneral.cpp line=50 error=71 : operation not supported
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/lzy/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/cuda/__init__.py", line 197, in _lazy_init
RuntimeError: cuda runtime error (71) : operation not supported at /opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/THC/THCGeneral.cpp:50

This error is sometimes raised by incorrect use of multiprocessing with CUDA, but that doesn’t seem to be the case here.
What does torch.cuda.is_available() return, which PyTorch and CUDA runtime versions are you using, and which GPU?
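To make it easier to collect everything in one go, here is a small diagnostic sketch (it assumes PyTorch is importable and falls back gracefully otherwise; the helper name cuda_diagnostics is just for illustration):

```python
def cuda_diagnostics():
    """Collect the PyTorch/CUDA information requested above as a dict."""
    try:
        import torch
    except ImportError:
        return {"torch": None}  # PyTorch not installed in this environment
    info = {
        "torch": torch.__version__,
        "cuda_runtime": torch.version.cuda,        # CUDA version shipped in the binaries
        "cuda_available": torch.cuda.is_available(),
    }
    if info["cuda_available"]:
        info["device"] = torch.cuda.get_device_name(0)
        info["capability"] = torch.cuda.get_device_capability(0)
    return info

print(cuda_diagnostics())
```

Note that "cuda_runtime" here reports the runtime bundled with the pip/conda binaries, not whatever toolkit is installed under /usr/local/cuda.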

>>> torch.cuda.is_available() 
>>> print(torch.__version__)

cat /usr/local/cuda/version.txt 
CUDA Version 10.0.130

nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130

My GPU is a GTX 1080 Ti.

It seems like CUDA is not being used at all.

Your local CUDA toolkit will not be used if you installed the conda binaries or pip wheels.
Did you properly select the desired CUDA runtime here, which ships with the binaries? If so, which NVIDIA driver are you using?
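To check the driver side, you can query nvidia-smi from Python (a sketch; the helper name driver_version is hypothetical, and it returns None when nvidia-smi is not on the PATH):

```python
import shutil
import subprocess

def driver_version():
    """Return the NVIDIA driver version reported by nvidia-smi, or None."""
    if shutil.which("nvidia-smi") is None:
        return None  # driver tools not installed or not on PATH
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or None

print(driver_version())
```

The driver version matters because each CUDA runtime shipped in the binaries requires a minimum driver; an old driver with a newer runtime can surface as exactly this kind of runtime error.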