Illegal instruction (core dumped) for CUDA

I get an “Illegal instruction (core dumped)” error when trying to copy an object to CUDA memory. I tried with Python 3.6 and 3.7, and with CUDA 9.0 and 9.2. I have no idea how to debug this.

This code works fine with pytorch 0.4.1 but always fails at:

import torch

Any idea how I can solve this?

GDB output:

(gdb) run
Starting program: /home/marco/anaconda3/envs/fastai/bin/python
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/".
[New Thread 0x7fffae733700 (LWP 4189)]
GeForce GTX 1070

Thread 1 "python" received signal SIGILL, Illegal instruction.
0x00007fffb9057bc3 in at::cuda::detail::initGlobalStreamState() ()
   from /home/marco/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/lib/

PyTorch version: 1.0.0.dev20181003
Is debug build: No
CUDA used to build PyTorch: 9.2.148

OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: Could not collect

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 396.54
cuDNN version: Could not collect

Versions of relevant libraries:
[pip] numpy (1.15.2)
[pip] torch (1.0.0.dev20181003)
[conda] cuda92 1.0 0 pytorch
[conda] pytorch-nightly 1.0.0.dev20181003 py3.6_cuda9.2.148_cudnn7.1.4_0 [cuda92] pytorch

This is weird. Thanks for reporting it. Can you report the output of the following GDB commands after the “Thread 1 "python" received signal SIGILL, Illegal instruction” message?

bt (backtrace)
disas (disassemble)
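For reference, one way to capture both in a single non-interactive session (assuming gdb is installed and the crash reproduces on a bare import) is a batch run like this sketch:

```
$ gdb --args python -c "import torch"
(gdb) run
(gdb) bt       # full backtrace at the faulting frame
(gdb) disas    # disassembly around the illegal instruction
```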

Do you know what CPU you have? On Linux, you can usually find out with cat /proc/cpuinfo
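The CPU model matters because a SIGILL inside a prebuilt binary often means it was compiled for instructions the CPU lacks (AVX is the usual suspect on older chips). A minimal sketch that pulls the model name and flag list out of /proc/cpuinfo (the helper name is mine, not from the thread):

```python
# Hedged sketch: parse /proc/cpuinfo for the CPU model and its instruction-set
# flags, then report whether AVX is present.
def cpu_model_and_flags(path="/proc/cpuinfo"):
    model, flags = None, set()
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            key = key.strip()
            if key == "model name" and model is None:
                model = value.strip()
            elif key == "flags" and not flags:
                flags = set(value.split())
    return model, flags

if __name__ == "__main__":
    try:
        model, flags = cpu_model_and_flags()
        print(model)
        print("AVX:", "avx" in flags)
    except OSError:
        print("/proc/cpuinfo not available on this platform")
```

If "avx" is missing from the flag list, a binary built with AVX enabled will die with exactly this kind of illegal-instruction crash.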

The CPU is an AMD Phenom II X6

GDB output:


Thanks, I think this PR will fix the problem:

I’ll work on merging it. Should be in a nightly build within a few days.


Thanks for looking at this.

I built from source with the PR you linked.

I still got the “Illegal instruction (core dumped)” error, but it’s different this time:

Thread 1 "python" received signal SIGSEGV, Segmentation fault.
THCPModule_initExtension (self=<optimized out>) at torch/csrc/cuda/Module.cpp:354
354       auto _state_cdata = THPObjectPtr(PyLong_FromVoidPtr(state));

GDB complete output:

Is it still the same problem, or did I mess up while building from source?

Thanks for trying out the PR. I’m not sure exactly what’s going on, but that’s a different error (“Segmentation fault” vs. “Illegal instruction”). If you’re building from source, make sure you run python setup.py clean before you rebuild. Sometimes only some files get rebuilt, which can cause these sorts of crashes.

That was the first time I built it. I will do that before rebuilding.

Can I do anything to help?

Could you try rebuilding with DEBUG=1? That may provide better information:

python setup.py clean
DEBUG=1 python setup.py install

If you run into the error, could you try running the following GDB commands:

info registers

Thanks for helping to debug this.

The gdb logs:

Hope this helps


With the latest pytorch-nightly update, the errors are gone.

(fastai) marco@phenom:~/MachineLearning$ python
Python 3.7.0 (default, Jun 28 2018, 13:15:42) 
[GCC 7.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
>>> torch.cuda.is_available()
>>> torch.tensor([1.,2.]).cuda()
tensor([1., 2.], device='cuda:0')

Thanks for your help.

That’s great! Thanks for following up.

FWIW, I had the same problem as @elmarculino, but I was able to solve it by installing my own MAGMA library.

Installing PyTorch with DEBUG=1 and running under gdb revealed a problem in a MAGMA-related function (see ). So I removed the conda package magma-cuda92, installed MAGMA 7.3.0 from source, recompiled, and it worked.

I’m getting the same error. Tried to do a clean install and it’s still happening.
Python version: 3.7.0
Pytorch version: ‘1.0.0a0+4b86a21’ (built from source)

GDB output: