How to manually add a constant-1 neuron to each layer of the network without breaking the dimension matching between a layer's output and the next layer's input


For some reason I need to remove the bias in torch.nn.Linear and add it manually myself. I did something like this:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LeNet_300_100(nn.Module):
        def __init__(self):
            super().__init__()
            # 785 = 784 input pixels + 1 constant-one "bias" input
            self.fc1 = nn.Linear(785, 300, bias=False)
            self.relu1 = nn.ReLU()
            self.fc2 = nn.Linear(301, 100, bias=False)
            self.relu2 = nn.ReLU()
            self.fc3 = nn.Linear(101, 10, bias=False)

        def forward(self, x):
            batch_size = x.size(0)
            x =, 784), torch.ones(batch_size, 1)), 1)  # batch_size x 785
            x = self.fc1(x)
            x = self.relu1(x)
            x =, torch.ones(batch_size, 1)), 1)  # batch_size x 301
            x = self.fc2(x)
            x = self.relu2(x)
            x =, torch.ones(batch_size, 1)), 1)  # batch_size x 101
            x = self.fc3(x)
            return F.log_softmax(x, dim=1)

However, the problem is a mismatch between the output of one layer and the input of the next. For instance, layer one outputs [batch_size x 300], but after concatenating the column of ones the next layer receives [batch_size x 301]. How can I avoid this? One possible way is to add a tensor of zero weights to the first layer and freeze them at zero. Any idea how to do that, or any other reasonable solution?
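One possible sketch of the "add zero weights and freeze them" idea (this is my reading of the approach, not a definitive implementation): enlarge the layer by one output unit, zero that unit's weight row, and mask its gradient with `Tensor.register_hook` so it never gets updated during training.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: fc1 gets a 301st output unit whose weights are
# zeroed and frozen, so its output stays identically zero.
fc1 = nn.Linear(785, 301, bias=False)
with torch.no_grad():
    fc1.weight[300].zero_()      # extra row starts at zero

mask = torch.ones_like(fc1.weight)
mask[300] = 0.0                  # gradient mask for the frozen row
fc1.weight.register_hook(lambda grad: grad * mask)

x = torch.randn(4, 785)
out = fc1(x)
print(out.shape)                 # torch.Size([4, 301])

out.sum().backward()
print(fc1.weight.grad[300].abs().sum())  # tensor(0.) -- the frozen row gets no gradient
```

Note that the frozen 301st unit outputs 0, not 1, so on its own this does not reproduce the constant-one bias input; whether it fits depends on what the extra neuron is supposed to represent.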