Nvidia Xavier AGX torch.cuda.is_available() returns False

On the Nvidia Xavier AGX, torch.cuda.is_available() returns False.

Here are the Python commands:

>>> import torch
>>> torch.__version__
'1.9.0'
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.2251, 0.7957, 0.5024],
        [0.0310, 0.7917, 0.1989],
        [0.5037, 0.4068, 0.3340],
        [0.0275, 0.4699, 0.5500],
        [0.8979, 0.1096, 0.2719]])
>>> torch.version.cuda
>>> torch.cuda.is_available()
False
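
When torch.version.cuda prints nothing in the interactive session it is returning None, which already hints that the installed wheel was built without CUDA support. A minimal diagnostic sketch (not from the original post) that gathers the relevant details in one place:

import torch

# Build-time information: torch.version.cuda is None for a CPU-only wheel
# and a version string such as "10.2" for a CUDA-enabled build.
print("torch version:   ", torch.__version__)
print("built with CUDA: ", torch.version.cuda)
print("cuDNN version:   ", torch.backends.cudnn.version())

# Runtime check: needs both a CUDA-enabled build and a working driver.
print("CUDA available:  ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device count:    ", torch.cuda.device_count())
    print("device name:     ", torch.cuda.get_device_name(0))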

At the Ubuntu prompt I get this for CUDA:
Command: cat /usr/local/cuda/version.txt
Returns: CUDA Version 10.2.300


Command: ./deviceQuery
Returns:

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Xavier"
CUDA Driver Version / Runtime Version 10.2 / 10.2
CUDA Capability Major/Minor version number: 7.2
Total amount of global memory: 15817 MBytes (16584876032 bytes)
( 8) Multiprocessors, ( 64) CUDA Cores/MP: 512 CUDA Cores
GPU Max Clock rate: 1377 MHz (1.38 GHz)
Memory Clock rate: 1377 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 524288 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: Yes
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS
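
Once a CUDA-enabled PyTorch build is in place, the figures deviceQuery reports can be cross-checked from Python. A small sketch, assuming a working install:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # Should roughly match the deviceQuery output above:
    # name "Xavier", compute capability 7.2, 8 multiprocessors, ~16 GB memory.
    print("name:               ", props.name)
    print("compute capability: ", f"{props.major}.{props.minor}")
    print("multiprocessors:    ", props.multi_processor_count)
    print("total memory (MB):  ", props.total_memory // (1024 * 1024))
else:
    print("CUDA is not available to this PyTorch build")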


I am using Python 3.6
Torch 1.9.0
torchaudio 0.10.0
torchvision 0.10.0
CUDA 10.2

pandas 1.1.5
numpy 1.19.4

Any help would be greatly appreciated.

Where did you get PyTorch from, and what does collect_env.py say?
You can take it from your PyTorch distribution or use these instructions from the bug report template:

wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
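
If downloading the script is inconvenient, the same report can usually be produced from the copy bundled with the installed package. A short sketch, assuming torch.utils.collect_env exposes main() as in recent releases:

# Run the collect_env report that ships with the installed torch package.
from torch.utils import collect_env

collect_env.main()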

Best regards

Thomas

First, let me say I really appreciate you responding; hopefully you can help.

I got PyTorch from the PyTorch website (Start Locally | PyTorch).
This is what I used:

PyTorch Build
Your OS: Linux
Package: Pip
Language: Python
Compute Platform: CUDA 10.2
Run this Command: pip3 install torch torchvision torchaudio

This is what I get from collect_env.py:

Collecting environment information…
PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.5 LTS (aarch64)
GCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.25

Python version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-4.9.253-tegra-aarch64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 10.2.300
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.19.4
[pip3] torch==1.9.0
[pip3] torchaudio==0.10.0
[pip3] torchvision==0.10.0
[conda] Could not collect

I think the ARM64 builds from PyTorch.org don't include CUDA, so you would need the libraries from NVIDIA:
PyTorch for Jetson - version 1.10 now available - Jetson Nano - NVIDIA Developer Forums

(I also built packages for it, but you’ll likely be happier with the ones from NVIDIA.)
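
A direct way to test this hypothesis is to look at the build metadata of the installed wheel; a small sketch (not from the thread):

import torch

# None here means the wheel was compiled without CUDA support, so
# torch.cuda.is_available() will be False even though the Jetson's
# CUDA toolkit and driver are fine.
if torch.version.cuda is None:
    print("CPU-only build of torch", torch.__version__, "- a CUDA-enabled wheel is needed")
else:
    print("torch", torch.__version__, "was built against CUDA", torch.version.cuda)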

Best regards

Thomas

Thank you very much, Thomas. When I run these commands I now get:

>>> torch.cuda.is_available()
True
>>> torch.zeros(1).cuda()
tensor([0.], device='cuda:0')

I think the problem is that I was using torch 1.9; when I followed your instructions, it installed version 1.8.
Not sure, but it works. Thanks.

Collecting environment information…
PyTorch version: 1.8.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.5 LTS (aarch64)
GCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.25

Python version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-4.9.253-tegra-aarch64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 10.2.300
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.19.4
[pip3] torch==1.8.0
[pip3] torchaudio==0.10.0
[pip3] torchvision==0.10.0
[conda] Could not collect
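
For completeness, a quick way to confirm the GPU is actually being exercised after the fix is to run a small workload on it. A minimal sketch, not part of the original thread:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("using device:", device)

# A small matrix multiply; on the Xavier AGX this should run on the integrated GPU.
a = torch.rand(1024, 1024, device=device)
b = torch.rand(1024, 1024, device=device)
c = a @ b
if device.type == "cuda":
    torch.cuda.synchronize()  # make sure the kernel actually finished on the GPU
print("result shape:", tuple(c.shape), "on device", c.device)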