Segmentation Fault (version '0.4.0a0+5b4a438')

Hello,

Everything was fine when I used 0.2.0. I updated to 0.4.0 yesterday.

Now I get a segmentation fault while training, and no other information is dumped; just a plain ‘Segmentation Fault’.
I’m working on a remote machine over ssh. Sometimes I only get the segmentation fault; other times the ssh connection drops and the process is killed. I don’t know why ssh disconnects.

BTW, if I don’t use torch.nn.DataParallel(), everything is fine. As soon as I turn it back on, it segfaults again.

Is there any information that could help me track this down?

Thank you.

@thnkim can you run the training under gdb with gdb --args python [your regular python arguments]? When it segfaults, it will stop in gdb. Then type where (or backtrace) in the gdb console, and it will give information about the issue. Can you paste the stack trace that gdb gives?


@smth Hello! Here is my backtrace (from ‘0.4.0a0+067f799’):

Thread 3308 "python" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffee5c34700 (LWP 32702)]
0x00007fffb16ba7cb in THRandom_random () from /home/polphit/anaconda3/lib/python3.6/site-packages/torch/lib/libATen.so.1
(gdb) 
(gdb) backtrace
#0  0x00007fffb16ba7cb in THRandom_random () from /home/polphit/anaconda3/lib/python3.6/site-packages/torch/lib/libATen.so.1
#1  0x00007fffb16ba81e in THRandom_random64 () from /home/polphit/anaconda3/lib/python3.6/site-packages/torch/lib/libATen.so.1
#2  0x00007fffb16ba97a in THRandom_normal () from /home/polphit/anaconda3/lib/python3.6/site-packages/torch/lib/libATen.so.1
#3  0x00007fffb133cc37 in THFloatTensor_normal () from /home/polphit/anaconda3/lib/python3.6/site-packages/torch/lib/libATen.so.1
#4  0x00007fffd1643002 in THPFloatTensor_stateless_randn (self=<optimized out>, args=<optimized out>, kwargs=<optimized out>)
    at /home/polphit/Downloads/pytorch/torch/csrc/generic/TensorMethods.cpp:59471
#5  0x00007ffff79931c9 in PyCFunction_Call (func=0x7fff9070dc60, args=0x7fff906f48b8, kwds=<optimized out>) at Objects/methodobject.c:98
#6  0x00007ffff793be96 in PyObject_Call (func=0x7fff9070dc60, args=<optimized out>, args@entry=0x7fff906f48b8, kwargs=<optimized out>, kwargs@entry=0x0)
    at Objects/abstract.c:2246
#7  0x00007fffd122a392 in THPUtils_dispatchStateless (tensor=0x11078d8, name=name@entry=0x7fffd21e6242 "randn", args=args@entry=0x7fff906f48b8, kwargs=kwargs@entry=0x0)
    at torch/csrc/utils.cpp:160
#8  0x00007fffd11f723d in dispatchStateless (args=0x7fff906f48b8, kwargs=0x0, name=0x7fffd21e6242 "randn") at torch/csrc/Module.cpp:234
#9  0x00007ffff7992df2 in _PyCFunction_FastCallDict (func_obj=0x7fffd3f9d3f0, args=0x7ffe997eb1e0, nargs=<optimized out>, kwargs=0x0) at Objects/methodobject.c:231
#10 0x00007ffff7a184ec in call_function (pp_stack=0x7ffee5c32ca8, oparg=<optimized out>, kwnames=0x0) at Python/ceval.c:4798
#11 0x00007ffff7a1b15d in _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3284
#12 0x00007ffff7a16a60 in _PyEval_EvalCodeWithName (_co=0x7fffa1958b70, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=2, 
    kwnames=0x7fff906cfe20, kwargs=0x7fff906cfe28, kwcount=4, kwstep=2, defs=0x7fffa196c1e0, defcount=2, kwdefs=0x0, closure=0x0, name=0x7fffefcba848, qualname=0x7fffa18da080)
    at Python/ceval.c:4128
#13 0x00007ffff7a16cfc in _PyFunction_FastCallDict (func=0x7fffa18db158, args=0x7ffee5c32ee0, nargs=2, kwargs=0x7fff9018f870) at Python/ceval.c:5031
#14 0x00007ffff793bba6 in _PyObject_FastCallDict (func=0x7fffa18db158, args=0x7ffee5c32ee0, nargs=<optimized out>, kwargs=0x7fff9018f870) at Objects/abstract.c:2295
#15 0x00007ffff793bdfc in _PyObject_Call_Prepend (func=0x7fffa18db158, obj=0x7fff902c0748, args=0x7fff902c0c88, kwargs=0x7fff9018f870) at Objects/abstract.c:2358
#16 0x00007ffff793be96 in PyObject_Call (func=0x7fff90856c08, args=<optimized out>, kwargs=<optimized out>) at Objects/abstract.c:2246
#17 0x00007ffff7a1c236 in do_call_core (kwdict=0x7fff9018f870, callargs=<optimized out>, func=0x7fff90856c08) at Python/ceval.c:5067
#18 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3366
#19 0x00007ffff7a16a60 in _PyEval_EvalCodeWithName (_co=0x7fffeb41f4b0, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=2, 
    kwnames=0x7fff90887920, kwargs=0x7fff90887928, kwcount=4, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7ffff7f9b170, qualname=0x7fffefbc6870)
    at Python/ceval.c:4128
#20 0x00007ffff7a16cfc in _PyFunction_FastCallDict (func=0x7fffe5c9e378, args=0x7ffee5c332d0, nargs=2, kwargs=0x7fff90560cf0) at Python/ceval.c:5031
#21 0x00007ffff793bba6 in _PyObject_FastCallDict (func=0x7fffe5c9e378, args=0x7ffee5c332d0, nargs=<optimized out>, kwargs=0x7fff90560cf0) at Objects/abstract.c:2295
#22 0x00007ffff793bdfc in _PyObject_Call_Prepend (func=0x7fffe5c9e378, obj=0x7fff902c0748, args=0x7fff902c0438, kwargs=0x7fff90560cf0) at Objects/abstract.c:2358
#23 0x00007ffff793be96 in PyObject_Call (func=0x7fff907f4d08, args=<optimized out>, kwargs=<optimized out>) at Objects/abstract.c:2246
#24 0x00007ffff79b3baf in slot_tp_call (self=0x7fff902c0748, args=0x7fff902c0438, kwds=0x7fff90560cf0) at Objects/typeobject.c:6167
#25 0x00007ffff793be96 in PyObject_Call (func=0x7fff902c0748, args=<optimized out>, kwargs=<optimized out>) at Objects/abstract.c:2246
#26 0x00007ffff7a1c236 in do_call_core (kwdict=0x7fff90560cf0, callargs=<optimized out>, func=0x7fff902c0748) at Python/ceval.c:5067
#27 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3366
#28 0x00007ffff7a16a60 in _PyEval_EvalCodeWithName (_co=0x7fffe5c7b150, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=7, 
    kwnames=0x7ffff7f98060, kwargs=0x7ffff7f98068, kwcount=0, kwstep=2, defs=0x7fffe5c7a878, defcount=1, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at Python/ceval.c:4128
#29 0x00007ffff7a16ee3 in PyEval_EvalCodeEx (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, 
    kws=<optimized out>, kwcount=0, defs=0x7fffe5c7a878, defcount=1, kwdefs=0x0, closure=0x0) at Python/ceval.c:4149
#30 0x00007ffff796eee1 in function_call (func=0x7fff901c1ae8, arg=0x7fff906dca08, kw=0x7fff903409d8) at Objects/funcobject.c:604
#31 0x00007ffff793be96 in PyObject_Call (func=0x7fff901c1ae8, args=<optimized out>, kwargs=<optimized out>) at Objects/abstract.c:2246
#32 0x00007ffff7a1c236 in do_call_core (kwdict=0x7fff903409d8, callargs=<optimized out>, func=0x7fff901c1ae8) at Python/ceval.c:5067
#33 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3366
#34 0x00007ffff7a15e74 in _PyFunction_FastCall (co=<optimized out>, args=<optimized out>, nargs=1, globals=<optimized out>) at Python/ceval.c:4880
#35 0x00007ffff7a185e8 in fast_function (kwnames=0x0, nargs=1, stack=<optimized out>, func=0x7fffd6d5bb70) at Python/ceval.c:4915
#36 call_function (pp_stack=0x7ffee5c33a68, oparg=<optimized out>, kwnames=0x0) at Python/ceval.c:4819
#37 0x00007ffff7a1b15d in _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3284
#38 0x00007ffff7a15e74 in _PyFunction_FastCall (co=<optimized out>, args=<optimized out>, nargs=1, globals=<optimized out>) at Python/ceval.c:4880
#39 0x00007ffff7a185e8 in fast_function (kwnames=0x0, nargs=1, stack=<optimized out>, func=0x7fffd6d5bd90) at Python/ceval.c:4915
#40 call_function (pp_stack=0x7ffee5c33c98, oparg=<optimized out>, kwnames=0x0) at Python/ceval.c:4819
#41 0x00007ffff7a1b15d in _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3284
#42 0x00007ffff7a15e74 in _PyFunction_FastCall (co=<optimized out>, args=<optimized out>, nargs=1, globals=<optimized out>) at Python/ceval.c:4880
#43 0x00007ffff7a16e75 in _PyFunction_FastCallDict (func=0x7fffd6d5bbf8, args=0x7ffee5c33e60, nargs=1, kwargs=0x0) at Python/ceval.c:4982
#44 0x00007ffff793bba6 in _PyObject_FastCallDict (func=0x7fffd6d5bbf8, args=0x7ffee5c33e60, nargs=<optimized out>, kwargs=0x0) at Objects/abstract.c:2295
#45 0x00007ffff793bdfc in _PyObject_Call_Prepend (func=0x7fffd6d5bbf8, obj=0x7fff90336860, args=0x7ffff7f98048, kwargs=0x0) at Objects/abstract.c:2358
#46 0x00007ffff793be96 in PyObject_Call (func=0x7fff9080cbc8, args=<optimized out>, kwargs=<optimized out>) at Objects/abstract.c:2246
#47 0x00007ffff7a68ae2 in t_bootstrap (boot_raw=0x7fff90771aa8) at ./Modules/_threadmodule.c:998
#48 0x00007ffff76ba6ba in start_thread (arg=0x7ffee5c34700) at pthread_create.c:333
#49 0x00007ffff6ad83dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Are you using PyTorch in a multi-threaded setting? Maybe Python threads?

The CPU random number generator is not thread-safe yet.

Well, I set num_workers=8 for the DataLoader, which uses my custom Dataset. The Dataset uses random functions from the random module (not numpy.random).
In that case, should I set num_workers=1?
Was I just lucky that the same code ran on 0.2.0? :frowning:
Thank you!
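As an aside, when a Dataset uses the random module across DataLoader workers, the usual pattern is to seed each worker process distinctly via worker_init_fn (a real DataLoader argument). A minimal sketch, with an arbitrary base seed:

```python
import random

BASE_SEED = 1234  # arbitrary; fix it per run for reproducibility

def worker_init_fn(worker_id):
    # DataLoader calls this once in each worker process, so every
    # worker gets its own distinct, reproducible random-module stream.
    random.seed(BASE_SEED + worker_id)

# Hypothetical usage (requires torch):
# loader = DataLoader(my_dataset, num_workers=8, worker_init_fn=worker_init_fn)

worker_init_fn(0)
first = random.random()
worker_init_fn(0)
assert random.random() == first  # same seed reproduces the stream
```

This doesn't change the thread-safety situation discussed above; it only makes worker randomness distinct and reproducible.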

DataLoader uses multiprocessing (not multithreading), so that should be fine. Hmm, weird. Are you sampling random numbers in double precision? I see THRandom_random64 in your trace.

Yes, very weird. The call stack seems to point somewhere inside torch’s random number generation.

z = Variable(torch.randn(4, 1, 128, 128).cuda())

is the only torch.randn call I use. Oh… I call it not in a thread or a subprocess, just in the main function.

For now you can do:

z = Variable(torch.cuda.FloatTensor(4, 1, 128, 128).normal_())

I will open a thread about making the RNG thread-safe and get this fixed (not exactly sure why it’s happening).


I also encountered the same problem, exactly as you described. It is very strange that the program sometimes kills the network. Another strange thing is that the program runs longer on an old TITAN X and shorter on a 1080 Ti, but both end with a segmentation fault (core dumped).

Would it be solved if I downgraded PyTorch to 0.2.0?

In my case, the same code worked with 0.2.0 but segfaulted with 0.4.
I changed my random number generation as @smth suggested, and it works properly now :slight_smile:
I think you can try torch’s normal_() (or the other in-place random functions) instead, since 0.4 has other advantages.
Thank you.

Removing the torch.manual_seed() call solved the problem for me.
