Hi, I’m implementing a CNN for MNIST.
I built the model like this:
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.layer = nn.Sequential(
            nn.Conv2d(1, 16, 5, bias=False),
            nn.ReLU(),
            nn.Conv2d(16, 32, 5, bias=False),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, 5, bias=False),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
        )
        self.fc_layer = nn.Sequential(
            nn.Linear(64 * 3 * 3, 100, bias=False),
            nn.ReLU(),
            nn.Linear(100, 10, bias=False),
        )

    def forward(self, x):
        out = self.layer(x)
        out = out.view(out.size(0), -1)  # flatten to (batch, 64*3*3)
        out = self.fc_layer(out)
        return out
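(For context, the 64*3*3 comes from 28x28 MNIST inputs: 28 → 24 → 20 → 10 → 6 → 3 after the two unpadded 5x5 convs, a 2x2 pool, another 5x5 conv, and a second pool. A quick sanity-check sketch of that arithmetic, assuming 1x28x28 inputs and a fresh CPU instance of the model:)

import torch

net = CNN()
dummy = torch.randn(1, 1, 28, 28)   # one fake MNIST-sized image
features = net.layer(dummy)
print(features.size())              # expected: (1, 64, 3, 3), i.e. 576 values after flattening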
BUT when I print the parameters of fc_layer, the bias parameters are still there…
for child in net.children():
    for param in child.fc_layer[0].parameters():
        print(param)
The result is:
Parameter containing:
-2.4335e-02 -2.6582e-02 -2.6456e-02 … 2.8038e-02 2.8356e-02 2.3965e-02
5.8043e-02 5.2095e-02 -1.4727e-02 … -1.2279e-02 4.1066e-02 -1.4745e-02
2.5018e-02 -3.0452e-02 -2.2104e-02 … 5.4361e-02 3.4657e-02 7.3185e-03
… ⋱ …
-1.9066e-02 -8.5296e-03 -3.2538e-02 … -5.0953e-03 2.4064e-02 -2.7636e-02
2.6700e-03 -1.1858e-02 6.5135e-03 … 1.6266e-02 2.5442e-02 3.6148e-03
-2.8540e-02 2.7211e-02 -2.3674e-02 … 4.1595e-03 -8.0679e-03 2.4592e-02
[torch.cuda.FloatTensor of size 100x576 (GPU 0)]
Parameter containing:
1.00000e-02 *
0.8742
-0.9784
1.9183
3.8764
.
.
.
-0.9190
-0.9720
2.3596
-3.0293
[torch.cuda.FloatTensor of size 100 (GPU 0)]
I DON’T want the bias. How can I remove it?
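For reference, this is the check I expected to pass on a fresh instance of the model above (my understanding is that with bias=False the Linear layer registers its bias as None, so it should not appear in parameters() at all); just a sketch of what I expect, not my actual training code:

net = CNN()                      # fresh instance of the model above
fc = net.fc_layer[0]

print(fc.bias)                   # expected: None when bias=False

for name, param in net.named_parameters():
    print(name, tuple(param.size()))   # expected: only '...weight' entries, no biases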