I have a project that needs to use a TensorFlow model and a PyTorch model at the same time. When the TF model runs inference, it fails with:
2019-01-13 13:48:27.819434: E tensorflow/stream_executor/cuda/cuda_dnn.cc:378] Loaded runtime CuDNN library: 7102 (compatibility version 7100) but source was compiled with 6021 (compatibility version 6000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
2019-01-13 13:48:27.819986: F tensorflow/core/kernels/conv_ops.cc:667] Check failed: stream->parent()->GetConvolveAlgorithms(conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms)
But the cuDNN version installed locally on my machine is:
#define CUDNN_MAJOR 5
#define CUDNN_MINOR 1
#define CUDNN_PATCHLEVEL 10
···
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
#include "driver_types.h"
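The version numbers in the error message can be decoded with the same formula that `CUDNN_VERSION` uses in the header above. A quick sketch in Python (the framing is mine, but the arithmetic is copied from the header):

```python
def cudnn_version(major, minor, patch):
    # Same formula as the CUDNN_VERSION macro in cudnn.h:
    # CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL
    return major * 1000 + minor * 100 + patch

# The local header (5.1.10) corresponds to version code 5110:
print(cudnn_version(5, 1, 10))  # -> 5110

# The error log's "runtime 7102" is cuDNN 7.1.2, and
# "compiled with 6021" is cuDNN 6.0.21:
print(cudnn_version(7, 1, 2))   # -> 7102
print(cudnn_version(6, 0, 21))  # -> 6021
```

So three different cuDNN versions are in play: the 5.1.10 header on disk, the 7.x library actually loaded at runtime, and the 6.x library TensorFlow 1.4.0 was compiled against.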
My PyTorch version is 0.4.1 and my TensorFlow version is 1.4.0.
It may be caused by PyTorch auto-loading the cuDNN library bundled with its own install, which is a different version from the one TensorFlow expects.
How can I avoid this problem?