No kernel image is available for execution on the device (PyTorch, Quadro K4200)

Hi,
I am trying to run MTCNN from PyTorch for a computer vision project. I have a Quadro K4200 GPU and I am using the “nvidia/cuda:11.4.0-runtime-ubuntu20.04” Docker image. When I try to run my Python file I get a “No kernel image is available for execution on the device” error.
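This error usually means the installed PyTorch binary contains no CUDA kernels for the GPU's compute capability. A quick way to verify is to compare the device's capability with the architectures the binary was compiled for; the sketch below does that with the real `torch.cuda.get_device_capability` / `torch.cuda.get_arch_list` APIs, while the parsing helper is my own illustration.

```python
# Hedged sketch: check whether the installed PyTorch build contains kernels
# for this GPU. The helper function is illustrative; the torch calls are
# real PyTorch APIs.

def is_capability_supported(capability, arch_list):
    """Return True if (major, minor) appears in an arch list like ['sm_35', 'sm_50']."""
    major, minor = capability
    compiled = set()
    for arch in arch_list:
        if arch.startswith("sm_"):
            digits = arch[len("sm_"):]
            # 'sm_35' -> (3, 5); last digit is the minor version
            compiled.add((int(digits[:-1]), int(digits[-1])))
    return (major, minor) in compiled

if __name__ == "__main__":
    try:
        import torch
        cap = torch.cuda.get_device_capability(0)   # e.g. (3, 0) for a Quadro K4200
        archs = torch.cuda.get_arch_list()          # e.g. ['sm_37', 'sm_50', ...]
        print(cap, archs, is_capability_supported(cap, archs))
    except Exception as exc:
        print("could not query the GPU:", exc)
```

If the check prints `False`, the binary simply has no kernels for that GPU, which matches the error message above.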

As per the “Building PyTorch from source to support a 3.0 CC device (Quadro K4200)” issue, I am trying to build PyTorch from source by following “Build pytorch from source | Beenfrog's research blog”, but I am getting the following error.

/usr/bin/ld: cannot find -lcudart_static
collect2: error: ld returned 1 exit status
make[4]: *** [Makefile:74: /pytorch/build/nccl/lib/libnccl.so.2.14.3] Error 1
make[4]: *** Waiting for unfinished jobs....
make[3]: *** [Makefile:25: src.build] Error 2
make[2]: *** [CMakeFiles/nccl_external.dir/build.make:86: nccl_external-prefix/src/nccl_external-stamp/nccl_external-build] Error 2
make[1]: *** [CMakeFiles/Makefile2:2045: CMakeFiles/nccl_external.dir/all] Error 2
make: *** [Makefile:146: all] Error 2

The build fails during the linking stage while building NCCL, but also note that CUDA 11.4 supports compute capability 3.5-8.6 and would most likely break while trying to compile CUDA kernels for your Kepler GPU with compute capability 3.0.
The last CUDA toolkit supporting compute capability 3.0 was CUDA 10.2, which is also already deprecated for source builds (in the current master).
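The toolkit/capability cutoffs quoted above can be captured in a tiny lookup, which makes the incompatibility explicit. The ranges below come from this thread; treat them as a sketch, not an authoritative support matrix.

```python
# Hedged sketch: compute-capability range per CUDA toolkit line, using the
# numbers quoted in this thread (approximate, not an official matrix).
SUPPORTED_CC = {
    "10.2": ((3, 0), (7, 5)),   # last toolkit supporting compute capability 3.0
    "11.4": ((3, 5), (8, 6)),   # Kepler CC 3.0 already dropped
}

def toolkit_supports(toolkit, capability):
    """True if a (major, minor) capability falls inside the toolkit's range."""
    lo, hi = SUPPORTED_CC[toolkit]
    return lo <= capability <= hi
```

For the Quadro K4200 this gives `toolkit_supports("11.4", (3, 0)) == False`, which is exactly why the CUDA 11.4 build breaks.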

So it will not work for my GPU, right?
If yes, what is the alternative method?

Yes, a source build in your current setup will most likely not work for your GPU, and you would need to use an older PyTorch release (I believe 1.12.x still supported CUDA 10.2) as well as an older CUDA toolkit (<=10.2, as mentioned).

Hi @ptrblck
As you mentioned, I have taken the “nvidia/cuda:10.2-cudnn8-runtime-ubuntu18.04” Docker image and tried to install torch==1.12.x, but Ubuntu 18.04 doesn’t have torch 1.12.x. Then I tried 1.10.2, but I am still getting the same error.

So, can I conclude that I cannot achieve this with the Quadro K4200?

I don’t fully understand this statement, since PyTorch is not an Ubuntu package (in case you are trying to install it via apt install).
You would need to build PyTorch from source as described here.

I am installing PyTorch via pip, and it shows that the 1.12.x version is not available. Also, in the given link they mention CUDA 11.0 or higher, but mine is 10.2; will it work?

The 1.12.1+cu102 pip wheel can be found here and you could change the cp38 tag to the Python version you want to use.
However, note that also these wheels will not work since the binaries were only supporting compute capability 3.5-7.5 with the CUDA10.2 runtime, and you would still need to build PyTorch from source for your GPU.
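The "change the cp38 tag" advice refers to the `cpXY` Python/ABI tags inside the wheel filename (PEP 427). The sketch below shows how those tags map to a Python version; the wheel name is just illustrative, and note that some older interpreters use an `m`-suffixed ABI tag (e.g. `cp37m`), so always pick the filename that actually exists on the download page.

```python
# Hedged sketch: rewrite the cpXY python/abi tags in a PEP 427 wheel
# filename to match a target interpreter. The example filename is
# illustrative; real wheels for older pythons may use tags like cp37m.
import sys

def retag_wheel(filename, py_major=None, py_minor=None):
    """Replace the cpXY tags in a '<name>-<version>-<py>-<abi>-<platform>.whl' name."""
    if py_major is None:
        py_major, py_minor = sys.version_info[:2]
    name, version, py_tag, abi_tag, platform = filename[:-len(".whl")].split("-")
    tag = f"cp{py_major}{py_minor}"
    return "-".join([name, version, tag, tag, platform]) + ".whl"

print(retag_wheel("torch-1.12.1+cu102-cp38-cp38-linux_x86_64.whl", 3, 9))
# -> torch-1.12.1+cu102-cp39-cp39-linux_x86_64.whl
```

As noted above, though, even the correctly tagged cu102 wheel only ships kernels for compute capabilities 3.5–7.5, so it still would not run on a CC 3.0 GPU.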

Ok Mr. @ptrblck, thank you for the quick responses. I will try to build from source.

I have followed the instructions from the documentation and got this error.

I’m not familiar with developing on Windows but based on the screenshot it seems the compiler (in particular cc1plus) ran into an internal error. Maybe updating the C++ toolchain could help.

I am running it on Ubuntu, not Windows.

In that case I don’t understand why your build environment points to Program Files(x86)\...\cl.exe which is a Windows executable.
Are you mixing Windows build commands into your Ubuntu setup?

I was trying to run the following commands.

After your suggestion, I followed the steps below:
1. Created an environment with Python 3.7.16 using Anaconda
2. add-apt-repository ppa:ubuntu-toolchain-r/test
3. apt-get update
4. apt install gcc-10 gcc-10-base gcc-10-doc g++-10
5. apt install libstdc++-10-dev libstdc++-10-doc
6. conda install astunparse numpy ninja pyyaml setuptools cmake typing_extensions six requests dataclasses
7. conda install mkl mkl-include
8. conda install -c pytorch magma-cuda102
9. git clone --recursive https://github.com/pytorch/pytorch
10. cd pytorch
11. git submodule sync
12. git submodule update --init --recursive
13. export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
14. python setup.py develop
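For a CC 3.0 target it can also help to pin the build to that architecture and to limit build parallelism, since fewer parallel compile jobs means lower peak memory and fewer compiler processes getting killed by the OOM killer. `TORCH_CUDA_ARCH_LIST` and `MAX_JOBS` are real PyTorch build variables; the exact values below are assumptions for a K4200 and are shown as a Python sketch rather than a verified recipe.

```python
# Hedged sketch: environment one could set before `python setup.py develop`
# when building for a compute capability 3.0 GPU. TORCH_CUDA_ARCH_LIST and
# MAX_JOBS are real PyTorch build variables; the values are assumptions.
import os

def build_env(base=None):
    env = dict(base if base is not None else os.environ)
    env["TORCH_CUDA_ARCH_LIST"] = "3.0"  # compile CUDA kernels only for Kepler CC 3.0
    env["MAX_JOBS"] = "2"                # fewer parallel jobs -> lower peak RAM usage
    return env

# usage (sketch):
# subprocess.run(["python", "setup.py", "develop"], env=build_env())
```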

and got this error.

I don’t see the actual error just the failing message, so you would need to check which part of the build failed.
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier.

Can you please explain it in simple words?

Your screenshot does not show the error, so scroll up and post the first line showing an “Error” output.

Hi Mr. @ptrblck,
I cannot find the error message, so I am posting the entire output.

Building wheel torch-2.0.0a0+gitd322f82
– Building version 2.0.0a0+gitd322f82
cmake --build . --target install --config Release
[459/874] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp.AVX2.cpp.o
In file included from /usr/include/c++/7/tuple:39:0,
from /usr/include/c++/7/functional:54,
from /pytorch/c10/core/DeviceType.h:10,
from /pytorch/c10/core/Device.h:3,
from /pytorch/build/aten/src/ATen/core/TensorBody.h:11,
from /pytorch/aten/src/ATen/core/Tensor.h:3,
from /pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:2,
from /pytorch/build/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp.AVX2.cpp:1:
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp: In instantiation of ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()>::<lambda(int64_t, int64_t)> [with bool ReLUFused = false; int64_t = long int]’:
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = false]::<lambda(int64_t, int64_t)>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = false]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = false]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = false]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = false; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::Tensor at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = false; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:4229:1: required from here
/usr/include/c++/7/array:94:12: note: ‘struct std::array<signed char, 32>’ has no user-provided default constructor
struct array
^~~~~
/usr/include/c++/7/array:110:56: note: and the implicitly-defined constructor does not initialize ‘signed char std::array<signed char, 32>::_M_elems [32]’
typename _AT_Type::_Type _M_elems;
^~~~~~~~
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp: In instantiation of ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()>::<lambda(int64_t, int64_t)> [with bool ReLUFused = false; int64_t = long int]’:
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = false]::<lambda(int64_t, int64_t)>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = false]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = false]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = false]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = false; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::Tensor at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = false; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:4229:1: required from here
/usr/include/c++/7/array:94:12: note: ‘struct std::array<unsigned char, 32>’ has no user-provided default constructor
struct array
^~~~~
/usr/include/c++/7/array:110:56: note: and the implicitly-defined constructor does not initialize ‘unsigned char std::array<unsigned char, 32>::_M_elems [32]’
typename _AT_Type::_Type _M_elems;
^~~~~~~~
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp: In instantiation of ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()>::<lambda(int64_t, int64_t)> [with bool ReLUFused = false; int64_t = long int]’:
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = false]::<lambda(int64_t, int64_t)>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = false]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = false]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = false]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = false; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::Tensor at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = false; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:4229:1: required from here
/usr/include/c++/7/array:94:12: note: ‘struct std::array<int, 8>’ has no user-provided default constructor
struct array
^~~~~
/usr/include/c++/7/array:110:56: note: and the implicitly-defined constructor does not initialize ‘int std::array<int, 8>::_M_elems [8]’
typename _AT_Type::_Type _M_elems;
^~~~~~~~
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp: In instantiation of ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()>::<lambda(int64_t, int64_t)> [with bool ReLUFused = true; int64_t = long int]’:
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = true]::<lambda(int64_t, int64_t)>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = true]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = true]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = true]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = true; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::Tensor at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = true; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:4230:1: required from here
/usr/include/c++/7/array:94:12: note: ‘struct std::array<signed char, 32>’ has no user-provided default constructor
struct array
^~~~~
/usr/include/c++/7/array:110:56: note: and the implicitly-defined constructor does not initialize ‘signed char std::array<signed char, 32>::_M_elems [32]’
typename _AT_Type::_Type _M_elems;
^~~~~~~~
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp: In instantiation of ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()>::<lambda(int64_t, int64_t)> [with bool ReLUFused = true; int64_t = long int]’:
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = true]::<lambda(int64_t, int64_t)>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = true]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = true]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = true]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = true; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::Tensor at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = true; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:4230:1: required from here
/usr/include/c++/7/array:94:12: note: ‘struct std::array<unsigned char, 32>’ has no user-provided default constructor
struct array
^~~~~
/usr/include/c++/7/array:110:56: note: and the implicitly-defined constructor does not initialize ‘unsigned char std::array<unsigned char, 32>::_M_elems [32]’
typename _AT_Type::_Type _M_elems;
^~~~~~~~
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp: In instantiation of ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()>::<lambda(int64_t, int64_t)> [with bool ReLUFused = true; int64_t = long int]’:
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = true]::<lambda(int64_t, int64_t)>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()>::<lambda()> [with bool ReLUFused = true]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = true]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t)::<lambda()> [with bool ReLUFused = true]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘struct at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = true; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]::<lambda()>’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:123:3: required from ‘at::Tensor at::native::{anonymous}::qcat_nhwc_kernel(const MaterializedITensorListRef&, int64_t, double, int64_t) [with bool ReLUFused = true; at::MaterializedITensorListRef = std::vector<std::reference_wrapper, std::allocator<std::reference_wrapper > >; int64_t = long int]’
/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:4230:1: required from here
/usr/include/c++/7/array:94:12: note: ‘struct std::array<int, 8>’ has no user-provided default constructor
struct array
^~~~~
/usr/include/c++/7/array:110:56: note: and the implicitly-defined constructor does not initialize ‘int std::array<int, 8>::_M_elems [8]’
typename _AT_Type::_Type _M_elems;
^~~~~~~~

[547/874] Building CXX object test_tensorexpr/CMakeFiles/test_tensorexpr.dir/test_ops.cpp.o
FAILED: test_tensorexpr/CMakeFiles/test_tensorexpr.dir/test_ops.cpp.o
/usr/bin/c++ -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_GTEST -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -I/pytorch/build/aten/src -I/pytorch/aten/src -I/pytorch/build -I/pytorch -I/pytorch/cmake/…/third_party/benchmark/include -I/pytorch/third_party/onnx -I/pytorch/build/third_party/onnx -I/pytorch/third_party/foxi -I/pytorch/build/third_party/foxi -I/pytorch/build/caffe2/…/aten/src -I/pytorch/torch/csrc/api -I/pytorch/torch/csrc/api/include -I/pytorch/c10/… -I/pytorch/third_party/pthreadpool/include -isystem /pytorch/build/third_party/gloo -isystem /pytorch/cmake/…/third_party/gloo -isystem /pytorch/cmake/…/third_party/googletest/googlemock/include -isystem /pytorch/cmake/…/third_party/googletest/googletest/include -isystem /pytorch/third_party/protobuf/src -isystem /root/miniconda3/envs/fr/include -isystem /pytorch/third_party/gemmlowp -isystem /pytorch/third_party/neon2sse -isystem /pytorch/third_party/XNNPACK/include -isystem /pytorch/third_party/ittapi/include -isystem /pytorch/cmake/…/third_party/eigen -isystem /pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/include -isystem /pytorch/third_party/ideep/include -isystem /pytorch/third_party/ideep/mkl-dnn/include -isystem /pytorch/third_party/googletest/googletest/include -isystem /pytorch/third_party/googletest/googletest -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas 
-Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIE -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Wno-unused-variable -pthread -std=gnu++1z -MD -MT test_tensorexpr/CMakeFiles/test_tensorexpr.dir/test_ops.cpp.o -MF test_tensorexpr/CMakeFiles/test_tensorexpr.dir/test_ops.cpp.o.d -o test_tensorexpr/CMakeFiles/test_tensorexpr.dir/test_ops.cpp.o -c /pytorch/test/cpp/tensorexpr/test_ops.cpp
c++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.