Using tensorflow and pytorch in the same code causes an error

I have a project that needs to use a tensorflow model and a pytorch model at the same time. When the tf model does inference, it causes:

2019-01-13 13:48:27.819434: E tensorflow/stream_executor/cuda/cuda_dnn.cc:378] Loaded runtime CuDNN library: 7102 (compatibility version 7100) but source was compiled with 6021 (compatibility version 6000).  If using a binary install, upgrade your CuDNN library to match.  If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
2019-01-13 13:48:27.819986: F tensorflow/core/kernels/conv_ops.cc:667] Check failed: stream->parent()->GetConvolveAlgorithms(conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms)

But the cudnn version on my local machine is:

#define CUDNN_MAJOR      5
#define CUDNN_MINOR      1
#define CUDNN_PATCHLEVEL 10
...
#define CUDNN_VERSION    (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

#include "driver_types.h"

My pytorch version is 0.4.1 and my tensorflow version is 1.4.0.
It may be caused by pytorch loading the cudnn version it ships with instead of the one installed on my machine.
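A quick way to check that is to ask torch directly which cuDNN it uses (small sketch, assuming torch.backends.cudnn is available in 0.4.1):

```python
import torch

# The cuDNN version the pytorch binary itself loads; if the hypothesis is right,
# this should print the 7102 from the error, not the 5110 from my local cudnn.h.
print(torch.backends.cudnn.version())
```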
How can I avoid this problem?

Hi,

Yes, pytorch ships with the latest cudnn binary for best performance.
You will need to get both installs to use the same version, either by finding a version of Tensorflow that works with the latest cudnn, or by compiling one of the two frameworks from source using the cudnn version expected by the other one so that they match.
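
If you want to double-check which libcudnn actually ends up in your process, a rough sketch like this works (Linux only, it just reads /proc/self/maps; the exact library file names depend on your install):

```python
def loaded_cudnn_libs():
    # Linux only: list the libcudnn shared objects currently mapped into this process.
    with open("/proc/self/maps") as maps:
        return sorted({line.split()[-1] for line in maps if "libcudnn" in line})

import torch
import tensorflow as tf

# Call this after the frameworks have actually run something on the GPU,
# since cuDNN may only be loaded lazily at the first convolution.
print(loaded_cudnn_libs())
```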

Thanks for your reply. I just found that if I import tensorflow first and import torch later, it seems to work. :joy:
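
For anyone who runs into the same thing, the workaround is literally just the import order (not a real fix, it just happens to avoid the mismatch in my setup with tensorflow 1.4.0 and pytorch 0.4.1):

```python
import tensorflow as tf  # import tensorflow first ...
import torch             # ... and torch afterwards

# then build and run both models as usual
```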