I have a simple convolution network:
import torch.nn as nn
import torch.nn.functional as F

class model(nn.Module):
    def __init__(self, ks=1):
        super(model, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=4, out_channels=32, kernel_size=ks, stride=1)
        self.fc1 = nn.Linear(8*8*32*ks, 64)
        self.fc2 = nn.Linear(64, 64)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

cnn = model(1)
Since the kernel size is 1 and there are 32 output channels, I assumed this layer would have 32*1*1 weights. But when I ask PyTorch for the shape of the weight matrix with cnn.conv1.weight.shape, it returns torch.Size([32, 4, 1, 1]). Why does the number of input channels matter for the weights of a Conv2d layer? Am I missing something?
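For reference, here is a minimal sketch of the observation, varying in_channels to show that it is always part of the weight shape (the rest of the layer setup mirrors the conv1 above):

```python
import torch.nn as nn

# Each output filter spans every input channel, so a Conv2d weight
# tensor has shape (out_channels, in_channels, kH, kW) -- even for
# a 1x1 kernel, one filter holds in_channels scalar weights.
for in_ch in (1, 4):
    conv = nn.Conv2d(in_channels=in_ch, out_channels=32, kernel_size=1, stride=1)
    print(tuple(conv.weight.shape))  # (32, in_ch, 1, 1)
```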