Installation error: ‘memcpy’ was not declared in this scope

When I install the PyTorch package from source, the build reports the following problem.

My platform is Ubuntu 16.04 + CUDA 7.5.

[ 1%] Linking CXX shared library libTHCUNN.so
[100%] Built target THCUNN
Install the project…
-- Install configuration: "Release"
-- Installing: /home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/lib/libTHCUNN.so.1
-- Installing: /home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/lib/libTHCUNN.so
-- Set runtime path of "/home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/lib/libTHCUNN.so.1" to ""
-- Up-to-date: /home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/include/THCUNN/THCUNN.h
-- Up-to-date: /home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/include/THCUNN/generic/THCUNN.h
-- Configuring done
-- Generating done
– Build files have been written to: /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl
[100%] Generating lib/libnccl.so
Compiling src/libwrap.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/libwrap.o
Compiling src/core.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/core.o
Compiling src/all_gather.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/all_gather.o
Compiling src/all_reduce.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/all_reduce.o
Compiling src/broadcast.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/broadcast.o
Compiling src/reduce_scatter.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/reduce_scatter.o
Compiling src/reduce.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/reduce.o
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
   return (char *) memcpy (__dest, __src, __n) + __n;
                                          ^
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
   return (char *) memcpy (__dest, __src, __n) + __n;


Note that for the Caffe installation, a similar problem can be solved as described here:

https://groups.google.com/forum/#!msg/caffe-users/Tm3OsZBwN9Q/XKGRKNdmBAAJ

That fix works by changing CMakeLists.txt; I wonder whether there is a similar solution for PyTorch.
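For reference, the Caffe-side workaround linked above amounts to forwarding an extra preprocessor define (the -D_FORCE_INLINES define discussed further down in this thread) to nvcc from CMakeLists.txt. A minimal sketch, assuming the build uses the FindCUDA module's CUDA_NVCC_FLAGS variable (the exact edit in the linked thread may differ):

# CMakeLists.txt sketch: append the workaround define to the flags FindCUDA passes to nvcc
list(APPEND CUDA_NVCC_FLAGS "-D_FORCE_INLINES")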

Thanks,
Yuhang

Digging further into this problem, I found that it is caused by the gcc version being too new:

Caffe:
https://github.com/BVLC/caffe/issues/4046
Torch:
https://github.com/szagoruyko/imagine-nn/issues/42

The usual way to solve it is to add the flag -D_FORCE_INLINES before compiling. Is there any place I could insert this flag when installing PyTorch?
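Concretely, the flag is just a preprocessor define, so any mechanism that puts it on the nvcc (or host compiler) command line works. As a generic illustration, not PyTorch-specific, with some_file.cu standing in for any CUDA source file:

# pass the define to nvcc directly
nvcc -D_FORCE_INLINES -c some_file.cu -o some_file.o
# or route it through the host compiler via -Xcompiler
nvcc -Xcompiler -D_FORCE_INLINES -c some_file.cu -o some_file.o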

CCFLAGS="-D_FORCE_INLINES" CXXFLAGS="-D_FORCE_INLINES" python setup.py install should do it.

Thanks for the reply, but the error is still there. I’m trying ‘CFLAGS’ instead of ‘CCFLAGS’.

I tried this:
CCFLAGS="-D_FORCE_INLINES" CFLAGS="-D_FORCE_INLINES" CXXFLAGS="-D_FORCE_INLINES" python setup.py install

But it’s still not working. It might be due to some other problem…

Any update?

I’m getting the same error as well.

Update your CUDA to 8.0.

Maybe you can follow this discussion:

In my case, I am using Ubuntu 16.04 and CUDA 7.5, and I added -D_FORCE_INLINES to CXXFLAGS in the file torch/lib/nccl/Makefile.
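For anyone hitting this on the same setup, the edit described above just adds an extra define to the host C++ flags in that Makefile. A sketch of what such an edit can look like (the existing flag assignments in torch/lib/nccl/Makefile are left untouched; appending a line like this is enough):

# torch/lib/nccl/Makefile sketch: add the glibc/CUDA 7.5 workaround define to the host C++ flags
CXXFLAGS += -D_FORCE_INLINES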

@shiningsurya @ywu36