Properly batch 1d inputs for 1d convolution

I am having an issue doing 1D convolution on my 2D input, which I convert to 3D for Conv1d. Essentially each sample is a vector of 10 numbers, and I reshape the batch to (miniBatchSize, 1, 10) right before I feed it to the network. This works, but when the loader produces a batch smaller than my batch size, the first fully connected layer complains that the number of features doesn't match what was specified in the network class, and training errors out.

What seems to be happening is that my 1D architecture depends on the batch size during training. If I change the batch size to 1, the out features are correctly computed for a single sample. Once I raise the batch size, I get a matmul error basically saying the first linear layer's input is bigger than expected. I have not had this problem with 2D conv; why am I facing this issue?

I can train the network using a batch size of 1, but it's not very effective. How do I train a 1D conv with a specified batch size and correctly calculate the out features for any batch size? I feel like the number of out features should not change with a higher batch size, assuming each batch element is processed independently, but it seems like this is not the case.
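To illustrate the shape arithmetic with a minimal NumPy sketch (assuming the conv/pool stack below, which turns a length-10 input into a (batch, 64, 2) feature map): flattening with a hard-coded leading dimension of 1 folds the whole batch into the feature axis, so the feature count the linear layer sees grows with the batch size.

```python
import numpy as np

# Simulated conv/pool output: (batch, channels, length) = (B, 64, 2)
for B in (1, 4):
    feats = np.zeros((B, 64, 2))
    folded = feats.reshape(1, -1)   # batch folded into the feature axis
    per_row = feats.reshape(B, -1)  # one row of 64 * 2 = 128 features per sample
    print(B, folded.shape, per_row.shape)
```

With B = 4, the folded version hands the linear layer 512 features instead of the 128 it was built for, which is exactly the matmul error described above; the per-row version yields 128 features regardless of B.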

Can you share your code?

class Net1(Module):
    # Net, intentionally using no activations

    def __init__(self):
        super(Net1, self).__init__()

        self.conv = Conv1d(1, 16, 3, padding=1)
        self.conv2 = Conv1d(16, 32, 3, padding=1)
        self.pool = MaxPool1d(2)
        self.conv3 = Conv1d(32, 64, 3, padding=1)
        self.conv4 = Conv1d(64, 64, 3, padding=1)

        #self.conv5 = Conv1d(64, 128, 3, padding=1)
        #self.conv6 = Conv1d(128, 128, 3, padding=1)

        self.linear = Linear(128, 52)
        self.linear2 = Linear(52, 10)



    def forward(self, x):
        x = self.conv(x)
        x = self.conv2(x)
        x = self.pool(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool(x)
        x = x.view(1, -1)
        x = self.linear(x)
        x = self.linear2(x)

        return x

Training loop

for epoch in range(epochs):  # loop over the dataset multiple times
    net.train()
    runningLoss = 0
    for data in trainLoader:
        inputs, labels = data["begin"], data["end"]
        inputs.resize_(1, 1, 10)
        print(inputs)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs.float().cuda())
        print(outputs)
        if epoch > 140:
            outputs = torch.round(outputs)

        loss = criterion(outputs.float(), labels.float().cuda())
        loss.backward()
        runningLoss += loss.item()
        #print(runningLoss)
        optimizer.step()
        print("Train Loss", loss.item())
    lossValues.append(runningLoss / len(trainLoader))
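The `inputs.resize_(1, 1, 10)` call above is likely why larger batches "still train": resizing in place to a smaller shape just keeps the first elements and silently discards the rest of the batch. A NumPy sketch of the same effect (the (4, 10) batch here is made up for illustration):

```python
import numpy as np

batch = np.arange(40).reshape(4, 10)    # pretend batch: 4 samples of 10 values
clipped = np.resize(batch, (1, 1, 10))  # analogous to Tensor.resize_(1, 1, 10)
print(clipped.ravel())                  # only sample 0 (values 0..9) survives

kept = batch.reshape(-1, 1, 10)         # shape-preserving alternative: (4, 1, 10)
print(kept.shape)
```

Reshaping with `view(-1, 1, 10)` (or `unsqueeze(1)` when only the channel dimension is missing) keeps every sample and adapts to whatever batch size the loader delivers.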

The input is just a tensor of size [1, 10].

If I increase the batch size, this will still train, but the network only ever sees the first 10 numbers of the batch. If I reshape based on the batch size instead, the number of features arriving at the first linear layer changes at runtime and causes the error.
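One way to make the forward pass batch-size-independent is to flatten using the runtime batch dimension, `x.view(x.size(0), -1)`, and to reshape inputs with `view(-1, 1, 10)` rather than `resize_`. A sketch of the same architecture with that change (CUDA, optimizer, and criterion omitted for brevity; `Linear(128, 52)` still holds because length 10 → pool → 5 → pool → 2 positions × 64 channels = 128):

```python
import torch
from torch.nn import Module, Conv1d, MaxPool1d, Linear

class Net1(Module):
    def __init__(self):
        super().__init__()
        self.conv = Conv1d(1, 16, 3, padding=1)
        self.conv2 = Conv1d(16, 32, 3, padding=1)
        self.pool = MaxPool1d(2)
        self.conv3 = Conv1d(32, 64, 3, padding=1)
        self.conv4 = Conv1d(64, 64, 3, padding=1)
        self.linear = Linear(128, 52)   # 64 channels * 2 positions after two pools
        self.linear2 = Linear(52, 10)

    def forward(self, x):
        x = self.pool(self.conv2(self.conv(x)))   # (B, 32, 5)
        x = self.pool(self.conv4(self.conv3(x)))  # (B, 64, 2)
        x = x.view(x.size(0), -1)                 # (B, 128) for any batch size B
        return self.linear2(self.linear(x))

net = Net1()
for B in (1, 4, 7):  # including a ragged final batch
    inputs = torch.randn(B, 10).view(-1, 1, 10)   # instead of resize_(1, 1, 10)
    print(net(inputs).shape)                      # torch.Size([B, 10])
```

Because every batch element flows through the convolutions independently, the per-sample feature count stays 128 no matter what the loader delivers, so the ragged last batch no longer errors out.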