PyTorch vs TensorFlow gives different results in Conv2d

Hi,

I found this issue while trying to load a trained PyTorch model’s weights into a TensorFlow Keras model.
The following code reproduces it:

import numpy as np
import torch
import tensorflow as tf

tf.enable_eager_execution()

th_x = torch.randn(1, 3, 8, 8)  # NCHW input
tf_x = np.float32(th_x.numpy())

s = (1, 1)  # strides
th_conv = torch.nn.Conv2d(3, 1, 3, s, 1, bias=False)  # 3 -> 1 channels, 3x3 kernel, padding=1
tf_conv = tf.keras.layers.Convolution2D(1, 3, s, 'same', data_format='channels_first', use_bias=False)
tf_conv(tf_x)  # call once to build the layer so set_weights works
# PyTorch stores conv weights as (out, in, kH, kW); Keras expects (kH, kW, in, out)
tf_conv.set_weights([th_conv.weight.detach().permute(2, 3, 1, 0).numpy()])

th_y = th_conv(th_x).detach().numpy()
tf_y = tf_conv(tf_x).numpy()

print(th_y.shape, th_y.ravel()[:10])
print(tf_y.shape, tf_y.ravel()[:10])
print('Error Rate:', np.mean(np.abs(th_y - tf_y) > 1e-6))

The code above produces the same results for PyTorch’s Conv2d and TensorFlow’s Convolution2D operations. However, when I set the strides to (2, 2), the two outputs differ completely.

I’m looking forward to hearing any solution to this issue. Thanks in advance.

I’m using tensorflow==1.14.0 and torch==1.1.0.

Most likely it is related to the different ‘same’ padding behavior in TF and PyTorch: PyTorch’s padding is symmetric, while TF’s ‘same’ padding can be asymmetric. For an 8x8 input with a 3x3 kernel, stride 1 needs 2 pixels of total padding, which TF splits evenly (1 on each side), matching PyTorch’s padding=1; stride 2 needs only 1 pixel, which TF puts entirely on the bottom/right, while PyTorch still pads both sides. I also faced this issue and solved it by adding manual padding in a symmetric manner before the convolution using tf.pad. The conv layer’s padding should be ‘valid’ after that.
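
A minimal sketch of that fix, assuming the same setup as the snippet above (8x8 input, 3x3 kernel, stride 2, and PyTorch’s padding=1, which tf.pad reproduces here with 1 pixel on each side of H and W):

import numpy as np
import torch
import tensorflow as tf

tf.enable_eager_execution()

th_x = torch.randn(1, 3, 8, 8)
tf_x = np.float32(th_x.numpy())

# PyTorch conv with stride 2 and symmetric padding=1
th_conv = torch.nn.Conv2d(3, 1, 3, (2, 2), 1, bias=False)

# Reproduce PyTorch's symmetric padding by hand: 1 pixel on each side of H and W (NCHW)
tf_x_padded = tf.pad(tf_x, [[0, 0], [0, 0], [1, 1], [1, 1]])
# Since the padding is already applied, the conv itself must use 'valid'
tf_conv = tf.keras.layers.Convolution2D(1, 3, (2, 2), 'valid',
                                        data_format='channels_first', use_bias=False)
tf_conv(tf_x_padded)  # build the layer so set_weights works
tf_conv.set_weights([th_conv.weight.detach().permute(2, 3, 1, 0).numpy()])

th_y = th_conv(th_x).detach().numpy()
tf_y = tf_conv(tf_x_padded).numpy()
print('Error Rate:', np.mean(np.abs(th_y - tf_y) > 1e-6))  # should now be 0.0

With the symmetric padding applied up front, both frameworks convolve the same padded tensor, so the outputs should agree to within floating-point tolerance.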