Here is myNet, defined as:
class myMLP(torch.nn.Module):
    def __init__(self):
        super(myMLP, self).__init__()
        self.model = torch.nn.Sequential(
            torch.nn.Linear(784, 200),
            torch.nn.Dropout(0.3),  # drop 30%
            torch.nn.LeakyReLU(inplace=True),
            torch.nn.Linear(200, 200),
            torch.nn.Dropout(0.4),  # drop 40%
            torch.nn.LeakyReLU(inplace=True),
            torch.nn.Linear(200, 10),
        )

    def forward(self, x):
        x = self.model(x)
        return x
which is defined to take in a [784] tensor and output a [10] tensor.
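For reference, a minimal sketch reproducing the model above to confirm the stated per-sample shapes (the instance and input here are illustrative, not from my actual training code):

```python
import torch

class myMLP(torch.nn.Module):
    def __init__(self):
        super(myMLP, self).__init__()
        self.model = torch.nn.Sequential(
            torch.nn.Linear(784, 200),
            torch.nn.Dropout(0.3),  # drop 30%
            torch.nn.LeakyReLU(inplace=True),
            torch.nn.Linear(200, 200),
            torch.nn.Dropout(0.4),  # drop 40%
            torch.nn.LeakyReLU(inplace=True),
            torch.nn.Linear(200, 10),
        )

    def forward(self, x):
        return self.model(x)

net = myMLP()
net.eval()  # disable dropout so the forward pass is deterministic
out = net(torch.randn(784))  # a single unbatched sample
print(out.shape)  # torch.Size([10])
```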
Yet when I use it in the following way, it runs successfully:
for epoch in range(maxepoch):
    myNet.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        print(data.size())
        data, target = data.to(device), target.to(device)
        logits = myNet(data)
        print(logits.size())
        loss = loss_function(logits, target)
Each batch, it took in [100, 28, 28] data without raising an error and output [100, 10] logits.
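To make the observation concrete, here is a small shape check using a random stand-in for one batch from train_loader (the tensor here is hypothetical, just matching the [100, 28, 28] size I printed):

```python
import torch

# Stand-in for one batch of 100 images of size 28x28 from train_loader.
data = torch.randn(100, 28, 28)

# The reshape done in the loop before the forward pass.
flat = data.view(-1, 28 * 28)
print(flat.shape)  # torch.Size([100, 784])
```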
How does that work? Thanks.