Use MKLDNN in PyTorch

I checked, and the Linux build does come with MKLDNN enabled:

PyTorch built with:
  - GCC 7.3
  - Intel(R) Math Kernel Library Version 2019.0.3 Product Build 20190125 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=0, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
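
Side note: besides dumping the whole build string, you can also query the backend directly; if I remember correctly, torch.backends.mkldnn.is_available() gives a quick yes/no:

import torch
# True when this PyTorch build can use the MKL-DNN backend on this machine
print(torch.backends.mkldnn.is_available())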

but the weird thing is, it's much slower than the normal (non-MKLDNN) path!

Update:
Wow, just wow! MKLDNN does make a HUGE difference: it's over 400x faster here (0.61s vs 0.0014s)!
I just wrote a simple benchmark with a standard model (resnet18), and the difference is night and day!

MKL time: 0.61s
MKLDNN time: 0.0014s

Here is the snippet:

#%%
import torch
# print the build configuration (shows whether MKL-DNN is enabled)
print(torch.__config__.show())
#%%
import time
class Timer(object):
    """A simple timer."""
    def __init__(self):
        self.total_time = 0.
        self.calls = 0
        self.start_time = 0.
        self.diff = 0.
        self.average_time = 0.

    def tic(self):
        # use time.time instead of time.clock because time.clock
        # does not normalize for multithreading
        self.start_time = time.time()

    def toc(self, average=True):
        self.diff = time.time() - self.start_time
        self.total_time += self.diff
        self.calls += 1
        self.average_time = self.total_time / self.calls
        if average:
            return self.average_time
        else:
            return self.diff

    def clear(self):
        self.total_time = 0.
        self.calls = 0
        self.start_time = 0.
        self.diff = 0.
        self.average_time = 0.

_t = {'mkl': Timer(),
      'mkldnn': Timer()}
#%%

import torch
from torchvision import models
net = models.resnet18(pretrained=False)
net.eval()
batch = torch.rand(10, 3, 224, 224)

# time a single forward pass on the default (MKL) path
_t['mkl'].tic()
for i in range(1):
    net(batch)
_t['mkl'].toc()

from torch.utils import mkldnn as mkldnn_utils
net = models.resnet18(pretrained=False)
net.eval()
# convert the model's weights to the MKL-DNN layout
net = mkldnn_utils.to_mkldnn(net)
batch = torch.rand(10, 3, 224, 224)
# the input tensor has to be converted to MKL-DNN layout as well
batch = batch.to_mkldnn()

# time a single forward pass on the MKL-DNN path
_t['mkldnn'].tic()
for i in range(1):
    net(batch)
_t['mkldnn'].toc()

print(f"time: {_t['mkl'].average_time}s")
print(f"time: {_t['mkldnn'].average_time}s")

The catch here is that you have to benchmark the actual forward pass of the network, and the speedup also seems to show up when the work is repetitive, so the CPU actually switches over to the MKL-DNN kernels!
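
For what it's worth, here is a rough sketch of how I'd time it with warm-up runs and multiple iterations under torch.no_grad() (the helper name bench and the warmup/iteration counts are just my own choices):

import time
import torch
from torch.utils import mkldnn as mkldnn_utils
from torchvision import models

def bench(net, batch, warmup=5, iters=50):
    # warm-up iterations let caches / kernel selection settle before timing
    with torch.no_grad():
        for _ in range(warmup):
            net(batch)
        start = time.perf_counter()
        for _ in range(iters):
            net(batch)
    # average seconds per forward pass
    return (time.perf_counter() - start) / iters

net = models.resnet18(pretrained=False).eval()
batch = torch.rand(10, 3, 224, 224)
print(f"MKL avg:    {bench(net, batch):.4f}s")

mkldnn_net = mkldnn_utils.to_mkldnn(models.resnet18(pretrained=False).eval())
print(f"MKLDNN avg: {bench(mkldnn_net, batch.to_mkldnn()):.4f}s")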