I am trying to recreate a Keras model in PyTorch. My input tensor has the shape (2, 10, 25). In Keras, I apply the following Conv2D layer to my input tensor:
import tensorflow as tf
batch_size = 1
x = tf.random.normal( (batch_size, 2, 10, 25) )
y = tf.keras.layers.Conv2D(filters=34,
                           kernel_size=(1, 10),
                           padding='valid')(x)
print(y.shape)
# (1, 2, 1, 34)
When trying to recreate this layer in PyTorch, I am not quite grasping which dimensionalities to use, since the Conv2d layer in PyTorch has no filters parameter. This is how far I've gotten on my own:
import torch
import torch.nn as nn
batch_size = 1
x = torch.randn(batch_size, 2, 10, 25)
conv = nn.Conv2d(in_channels=2, out_channels=2, kernel_size=(10,1), bias=True)
y = conv(x)
print(y.shape)
# torch.Size([1, 2, 1, 25])
Now, all dimensions except the last one match the Keras output. I suspect my understanding of the out_channels parameter is not correct, but I am not sure how to proceed from here.
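For reference, here is a sketch of what I suspect the matching call might look like, assuming the mismatch comes from Keras defaulting to channels_last while PyTorch uses channels_first (so the 25 would be the channel axis in Keras, and filters would map to out_channels) — I am not certain this is the right reading:

```python
import torch
import torch.nn as nn

batch_size = 1
x = torch.randn(batch_size, 2, 10, 25)

# If Keras treats the input as (N, H, W, C) = (1, 2, 10, 25),
# permute to PyTorch's channels-first layout (N, C, H, W).
x_cf = x.permute(0, 3, 1, 2)  # (1, 25, 2, 10)

# filters=34 in Keras would correspond to out_channels=34 here,
# and the 25 channels become in_channels.
conv = nn.Conv2d(in_channels=25, out_channels=34, kernel_size=(1, 10), bias=True)
y = conv(x_cf)                 # (1, 34, 2, 1)

# Permute back to channels_last to compare with the Keras shape.
y = y.permute(0, 2, 3, 1)      # (1, 2, 1, 34)
print(y.shape)
# torch.Size([1, 2, 1, 34])
```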
Any help is hugely appreciated!