I know it might be intuitive to others, but I find shaping data for 1D or 2D convolutions hugely confusing and frustrating: the documentation makes it look simple, yet I keep getting errors about kernel size or input shape. I have been trying to understand the data shaping from the link. Basically, I am attempting to use Conv1d in RL; the Conv1d should accept data from 12 sensors over 25 timesteps.
The data shape is (25, 12)
I am attempting to use the model below:
```python
class DQN_Conv1d(nn.Module):
    def __init__(self, input_shape, n_actions):
        super(DQN_Conv1d, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(input_shape, 32, kernel_size=4, stride=4),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.Linear(64, 512),
            nn.ReLU(),
            nn.Linear(512, n_actions)
        )

    def forward(self, x):
        return self.conv(x)
```
but I get this error:
```
RuntimeError: Calculated padded input size per channel: (1 x 3). Kernel size: (1 x 4). Kernel size can't be greater than actual input size at c:\a\w\1\s\windows\pytorch\aten\src\thnn\gen
```
How should I properly shape the data from 12 sensors with 25 timesteps for a 1D convolution in PyTorch?
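From re-reading the docs, my understanding is that `nn.Conv1d` expects input of shape `(batch, channels, length)`, so the 12 sensors should be the channels and the 25 timesteps the length. I suspect the error above comes from the axes being swapped: a `(1, 25, 12)` input leaves only 3 steps after the first conv (kernel 4, stride 4 over length 12), and then the second kernel of size 4 no longer fits. Here is a minimal sketch of what I think the shaping should look like (`obs` is just an illustrative name for one observation):

```python
import torch
import torch.nn as nn

# One observation from 12 sensors over 25 timesteps, stored as (25, 12)
obs = torch.randn(25, 12)

# Conv1d wants (batch, channels, length): channels = sensors, length = timesteps,
# so transpose to (12, 25) and add a batch dimension -> (1, 12, 25)
x = obs.t().unsqueeze(0)
print(x.shape)  # torch.Size([1, 12, 25])

# First layer takes in_channels=12; with length 25, kernel_size=4 / stride=4
# gives an output length of (25 - 4) // 4 + 1 = 6
conv = nn.Conv1d(12, 32, kernel_size=4, stride=4)
out = conv(x)
print(out.shape)  # torch.Size([1, 32, 6])
```

Is transposing like this the right approach, or should I store the data as `(12, 25)` from the start?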
Thanks in advance