Hi, thanks for replying. I made some changes to the code. Basically, I unsqueezed the output of the last fully connected layer to get it into shape (T, N, C) so that it can be fed to the CTC loss function. There is no error now, but training is not producing any meaningful output. Can you help me figure out where I am going wrong? I am attaching the updated code:
MODEL NETWORK
import torch
import torch.nn as nn
from torch.nn.modules.dropout import Dropout2d

train_on_gpu = torch.cuda.is_available()


class CRNN(nn.Module):
    def __init__(self, nclass=76, imgH=32, nc=1, nh=512, n_rnn=2, leakyRelu=False):
        super(CRNN, self).__init__()
        assert imgH % 16 == 0, 'imgH has to be a multiple of 16'

        ks = [3, 3, 3, 3, 3, 3, 2]              # kernel sizes
        ps = [1, 1, 1, 1, 1, 1, 0]              # paddings
        ss = [1, 1, 1, 1, 1, 1, 1]              # strides
        nm = [32, 64, 128, 256, 256, 512, 512]  # output channels

        cnn = nn.Sequential()

        def convRelu(i, batchNormalization=False, leakyRelu=False, relu=False):
            nIn = nc if i == 0 else nm[i - 1]
            nOut = nm[i]
            cnn.add_module('Conv{0}'.format(i),
                           nn.Conv2d(nIn, nOut, ks[i], ss[i], ps[i]))
            if batchNormalization:
                cnn.add_module('BatchNormal{0}'.format(i),
                               nn.BatchNorm2d(nOut))
            if leakyRelu:
                cnn.add_module('ReLU{}'.format(i),
                               nn.LeakyReLU(0.2, inplace=True))
            if relu:
                cnn.add_module('ReLU{}'.format(i), nn.ReLU(True))

        convRelu(0, leakyRelu=True, batchNormalization=True)
        cnn.add_module('Pooling{}'.format(0), nn.MaxPool2d(2, 2))    # 32x50x16
        # cnn.add_module('dropout{}'.format(0), Dropout2d(p=0.3))
        convRelu(1, leakyRelu=True, batchNormalization=True)
        cnn.add_module('Pooling{}'.format(1), nn.MaxPool2d(2, 2))    # 64x25x8
        # cnn.add_module('dropout{}'.format(1), Dropout2d(p=0.2))
        convRelu(2, batchNormalization=True, relu=True)
        convRelu(3, batchNormalization=True, relu=True)
        cnn.add_module('Pooling{}'.format(2), nn.MaxPool2d((1, 2)))  # 256x25x4
        convRelu(4, batchNormalization=True, relu=True)
        convRelu(5, batchNormalization=True, relu=True)
        cnn.add_module('Pooling{}'.format(3), nn.MaxPool2d((1, 2)))  # 512x25x2
        convRelu(6, batchNormalization=True, relu=True)              # 512x24x1

        self.cnn = cnn
        self.fc = nn.Sequential(
            nn.Linear(512 * 35, 1000), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(1000, 500), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(500, nclass))

    def forward(self, input):
        conv = self.cnn(input)
        b, c, h, w = conv.size()
        print(conv.size())
        # conv = conv.view(b, c, h * w)
        # conv = conv.permute(2, 0, 1)  # [w, b, c]
        conv = conv.view(b, -1)       # flatten feature map to (N, 512*35)
        output = self.fc(conv)        # (N, nclass)
        output = output.unsqueeze(0)  # (1, N, nclass), i.e. (T, N, C) with T = 1
        m = nn.LogSoftmax(dim=2)
        output = m(output)
        return output


model = CRNN()
print(model)
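For reference, this is a minimal sketch of the input contract that `torch.nn.CTCLoss` expects: log-probabilities of shape (T, N, C) where T is the number of time steps, N the batch size, and C the number of classes including the blank, plus per-sample input and target lengths with `input_lengths >= target_lengths`. The sizes (T=35, N=4, C=76, target length 10) are illustrative placeholders, not taken from my training script.

```python
import torch
import torch.nn as nn

T, N, C = 35, 4, 76  # time steps, batch size, classes (index 0 = blank)

# Log-probabilities over classes at each time step: shape (T, N, C).
log_probs = torch.randn(T, N, C).log_softmax(2)

# Integer targets (no blanks), padded to a common length of 10 per sample.
targets = torch.randint(1, C, (N, 10), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)    # T per sample
target_lengths = torch.full((N,), 10, dtype=torch.long)  # label length per sample

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())  # finite scalar, since input_lengths >= target_lengths
```

CTC can only align a target of length S if the model emits at least S time steps, which is why T matters here.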
The training log (two runs) is pasted below:
Epoch: 1, Training Loss: nan, Validation Loss: inf, Train Accuracy: 0.0, Validation Acuuracy: 0.0 CER: 99.96780712484833
Validation loss decreased (inf → inf). Saving model …
Epoch: 2, Training Loss: nan, Validation Loss: inf, Train Accuracy: 0.0, Validation Acuuracy: 0.0 CER: 100.0
Validation loss decreased (inf → inf). Saving model …
[epochs 3–5 repeat identically: Training Loss: nan, Validation Loss: inf, CER: 100.0]
Epoch: 1, Training Loss: nan, Validation Loss: inf, Train Accuracy: 8.355614973262032e-05, Validation Acuuracy: 0.0 CER: 99.98144259535059
Validation loss decreased (inf → inf). Saving model …
Epoch: 2, Training Loss: nan, Validation Loss: inf, Train Accuracy: 8.355614973262032e-05, Validation Acuuracy: 0.0 CER: 99.99164438502673
Validation loss decreased (inf → inf). Saving model …
[epochs 3–35 repeat identically: Training Loss: nan, Validation Loss: inf, CER: 99.99164438502673]