Use MKLDNN in PyTorch

What would it mean if I see a slowdown after converting a module to MKLDNN? (benchmarks below)

$ python3 -m timeit --setup="import torch; net = torch.nn.Linear(1000, 2); batch = torch.rand(16, 1000)" "net(batch)"
10000 loops, best of 3: 26 usec per loop

vs.

$ python3 -m timeit --setup="import torch; from torch.utils import mkldnn as mkldnn_utils; net = torch.nn.Linear(1000, 2); net = mkldnn_utils.to_mkldnn(net); batch = torch.rand(16, 1000); batch = batch.to_mkldnn()" "net(batch)"
10000 loops, best of 3: 60.2 usec per loop
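For reference, the same kind of comparison can also be driven from Python with the standard-library `timeit` module instead of the CLI. The sketch below uses a pure-Python stand-in workload so it runs anywhere; substitute the torch setup/statement strings from the commands above to reproduce the actual measurement (the helper name `best_usec` is my own, not a `timeit` API):

```python
import timeit

def best_usec(stmt, setup, number=10000, repeat=3):
    """Best-of-N microseconds per loop, like `python -m timeit` reports."""
    # timeit.repeat returns total seconds for each run of `number` loops;
    # take the fastest run and convert to usec per loop.
    best = min(timeit.repeat(stmt, setup=setup, number=number, repeat=repeat))
    return best / number * 1e6

# Stand-in workloads; replace with the torch / mkldnn setup strings above.
setup = "data = list(range(1000))"
baseline = best_usec("sum(data)", setup)
variant = best_usec("sum(x for x in data)", setup)
print(f"baseline: {baseline:.1f} usec/loop, variant: {variant:.1f} usec/loop")
```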

My flags are:

PyTorch built with:

  • GCC 7.3
  • Intel® Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel® 64 architecture applications
  • Intel® MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
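A build report like the one above can be printed from Python, and MKL-DNN availability checked at runtime, with the following (assumes a PyTorch build where `torch.backends.mkldnn` is present, which holds for any build with `USE_MKLDNN=ON`):

```python
import torch

# Print the build configuration: compiler, MKL / MKL-DNN versions,
# and the Build settings line shown above.
print(torch.__config__.show())

# Check whether the MKL-DNN backend is actually usable at runtime.
print(torch.backends.mkldnn.is_available())
```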

I'm also awaiting mainstream Intel GPU support. Do you know of any tracking issue for that? And by DNNL, are you referring to this rebranding of MKL-DNN: https://github.com/oneapi-src/oneDNN?

Yes, oneDNN is the former DNNL.
DNNL 1.2 will be available starting from PyTorch v1.6 (it's enabled by default in the nightly builds).
As far as I know, there has not been any work on Intel GPU support so far. You can always check the existing issues on the PyTorch GitHub tracker, or create a new issue to ask how things are going.
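Since the DNNL 1.2 change lands in v1.6, a quick way to check whether your installed build is new enough is to compare the version tuple (a minimal sketch; the parsing assumes the usual `major.minor.patch` version string):

```python
import torch

# Parse the leading major.minor from torch.__version__ (e.g. "1.6.0").
major, minor = (int(part) for part in torch.__version__.split(".")[:2])

# Per the note above, DNNL 1.2 ships by default from v1.6 onward.
print(torch.__version__, "ships DNNL 1.2 by default:", (major, minor) >= (1, 6))
```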

Hi, I want to know why PyTorch's CPU inference speed differs so much between Windows and Linux.