I’m not sure what you’re trying to do.
You want to apply a convolution with a kernel size of 7 to a 1D vector of size 13, is that correct?
In that case your input should be a tensor with 1 channel containing a 1D vector of size 13.
So your input size should be
[7, 1, 13] (batch_size, channels, dim).
And in your model, the first conv must have one input channel instead of 13.
Given your error, you're feeding your model an input of size
[3082092, 13], so it seems you didn't follow my first comment. Starting from what I wrote, we have:
features, labels = batch[:, :-1], batch[:, -1] # features size is [7, 13]
In order to obtain the needed dimension you simply need to create the channel dim:
features = features.unsqueeze(dim=1) # feature size is now [7, 1, 13]
Then you can apply your model (with the first conv corrected to have 1 input channel).
Then after this first convolution your tensor will be of shape
[7, 1024, 7] (batch_size, output channels of the first conv, output length as a function of kernel size, padding, dilation, and stride)
As you then apply two convolutions with a kernel size of 1, the output length won't change. So at the end of your model, the size is
[7, 50, 7].
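You can check these shapes yourself; for stride 1, padding 0, and dilation 1 the output length is simply L_out = L_in - kernel_size + 1, i.e. 13 - 7 + 1 = 7 here:

```python
import torch

# First conv: 1 input channel, 1024 output channels, kernel size 7
conv = torch.nn.Conv1d(1, 1024, kernel_size=7)
x = torch.randn(7, 1, 13)      # (batch_size, channels, dim)
y = conv(x)
print(y.shape)                  # torch.Size([7, 1024, 7])
```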
If you want to feed that to a linear classifier, you can flatten the last two dims and feed the result to your classifier; also correct your classifier's input size, which should be 50 * 7.
Here is a complete example:
import torch

model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 1024, kernel_size=7, stride=1, padding=0, dilation=1, groups=1, bias=True),
    torch.nn.Conv1d(1024, 1024, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=True),
    torch.nn.Conv1d(1024, 50, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=True),
)
classifier = torch.nn.Linear(50 * 7, 50)
dummy_features = torch.randn(3000, 13)
dummy_labels = torch.randint(2, (3000, 1)).float() # integers in [0, 1], cast to float so hstack keeps a single dtype
train_data = torch.hstack((dummy_features, dummy_labels))
train_loader = torch.utils.data.DataLoader(train_data, batch_size=7, shuffle=True)
batch = next(iter(train_loader))
features, labels = batch[:, :-1], batch[:, -1]
features = features.unsqueeze(dim=1)
outputs = model(features)
outputs = outputs.view(outputs.size(0), -1)
scores = classifier(outputs) # size (7, 50)
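From there, one possible way to compute a loss (my assumption, not part of your question, using CrossEntropyLoss over the 50 classes) would be the following; note that the labels come back as floats after the hstack, so they need to be cast to integer class indices:

```python
import torch

# Hypothetical loss step; `scores` stands in for the classifier output above
scores = torch.randn(7, 50, requires_grad=True)  # (batch_size, num_classes)
labels = torch.randint(50, (7,)).float()          # floats, as they would be after the hstack
criterion = torch.nn.CrossEntropyLoss()
loss = criterion(scores, labels.long())           # CrossEntropyLoss expects integer class indices
loss.backward()
```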