I want to convert a 4d maxpool from TensorFlow to PyTorch, but I could not find the equivalent function:
tf.nn.max_pool(first_layer,
               ksize=[1, 1, 5, 1],
               strides=[1, 1, 5, 1],
               padding='SAME')
to
torch.nn.MaxPool4d(kernel_size=(1, 1, 5, 1), stride=(1, 1, 5, 1))
first_layer is a 4D tensor.
But it seems that PyTorch doesn't have a MaxPool4d.
How can I do this conversion?
I’m not sure if this means your input tensor has 4 dimensions, but if so you could use nn.MaxPool2d. Assuming the input tensor dimensions are defined as [batch_size, channels, height, width], specify the kernel_size and stride for the spatial dimensions only (the first two entries in your ksize are 1, so they would have no effect anyway).
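A minimal sketch of that suggestion, assuming the input layout is [batch_size, channels, height, width] and the pooling of 5 with stride 5 is meant to act on the height dimension:

```python
import torch
import torch.nn as nn

# Assumed input shape: [batch_size=32, channels=5, height=150000, width=1].
# The TF ksize/strides of [1, 1, 5, 1] then translate to a 2-tuple
# kernel_size and stride on the spatial dims (height, width):
pool = nn.MaxPool2d(kernel_size=(5, 1), stride=(5, 1))

x = torch.randn(32, 5, 150000, 1)
out = pool(x)
print(out.shape)  # torch.Size([32, 5, 30000, 1])
```

Note that nn.MaxPool2d only pools over the last two dimensions, which is why the 4-element TF ksize collapses to a 2-tuple here.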
But when I apply a conv2d on the output of the maxpool2d (a tensor with the reduced shape (32, 5, 3000, 1)):
nn.Conv2d(5, 5, kernel_size=(5, 4, 1, 4), stride=(1, 5, 1, 1), bias=True)
it returns the following error:
RuntimeError: expected padding to be a single integer value or a list of 4 values to match the convolution dimensions, but got padding=[0, 0]
2d layers use the kernel_size and stride on the spatial dimensions only.
Since you are again defining a 4D kernel_size, could you explain your use case a bit more and give more information about the input dimensions to these layers?
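For reference, this is what a valid nn.Conv2d definition would look like; the kernel and stride values here are placeholders, not necessarily what your model needs:

```python
import torch
import torch.nn as nn

# nn.Conv2d takes kernel_size and stride as an int or a 2-tuple for the
# spatial dims (height, width); a 4-tuple raises the padding error above.
conv = nn.Conv2d(in_channels=5, out_channels=5,
                 kernel_size=(5, 1), stride=(5, 1), bias=True)

x = torch.randn(32, 5, 3000, 1)
out = conv(x)
print(out.shape)  # torch.Size([32, 5, 600, 1])
```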
It didn’t work. This is the information about the input dimensions and layers:
s = 150,000
my input is [32,5,s, 1] and I need to apply the following maxpool and conv functions:
fully connected (32, 512)
After applying the first conv2d at the second line it returns the following error:
RuntimeError: Calculated padded input size per channel: (3000 x 1). Kernel size: (5 x 4). Kernel size can’t be greater than actual input size
This layer won’t work, as the kernel_size is specified as (5, 4) while the input seems to have a shape of [batch_size, channels, 3000, 1], which is too small in the width dimension.
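The check can be reproduced directly; the kernel width must not exceed the (padded) input width. Shrinking the kernel width to 1 is one assumed fix, not necessarily the right one for your architecture:

```python
import torch
import torch.nn as nn

x = torch.randn(32, 5, 3000, 1)  # width is only 1

# Kernel width 4 > input width 1 -> RuntimeError at the forward pass
try:
    nn.Conv2d(5, 5, kernel_size=(5, 4), stride=(1, 1))(x)
except RuntimeError as e:
    print(e)

# With the kernel width reduced to 1, the conv runs:
out = nn.Conv2d(5, 5, kernel_size=(5, 1), stride=(1, 1))(x)
print(out.shape)  # torch.Size([32, 5, 2996, 1])
```

Alternatively, padding the width or reshaping the input so the second spatial dimension is larger would also avoid the error.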