Hi,
I am working with a pretrained VGG16 model to classify images, and I have appended my own layers to its classifier like this:
import torch.nn as nn
import torchvision

model = torchvision.models.vgg16(pretrained=True)
model.classifier.add_module('7', nn.Linear(in_features=1000, out_features=500, bias=True))
model.classifier.add_module('8', nn.ReLU())
model.classifier.add_module('9', nn.Linear(in_features=500, out_features=100, bias=True))
model.classifier.add_module('10', nn.ReLU())
model.classifier.add_module('11', nn.Linear(in_features=100, out_features=67, bias=True))
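(As a quick sanity check, the appended layers do chain together dimension-wise — this is just the extra head on a random tensor, not the full VGG16:)

```python
import torch
import torch.nn as nn

# just the layers appended above, to confirm the dimensions chain: 1000 -> 500 -> 100 -> 67
head = nn.Sequential(
    nn.Linear(1000, 500),
    nn.ReLU(),
    nn.Linear(500, 100),
    nn.ReLU(),
    nn.Linear(100, 67),
)
out = head(torch.randn(2, 1000))
print(out.shape)  # torch.Size([2, 67])
```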
When I start training, I print the output of the model and it is nan. I don't know what's going on; I've checked that my input doesn't contain nan. Here is my dataloader:
import matplotlib.image as mpimg
import numpy as np
import torch
from torch.utils.data import Dataset

class dataload(Dataset):
    def __init__(self, x, transform=None):
        self.data = x
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        img = mpimg.imread(self.data[i]) / 255.0
        img = img.transpose((2, 0, 1))  # HWC -> CHW
        img = torch.from_numpy(img).float()
        # class index is the digit after the first character of the filename
        tmp = np.int32(self.data[i].split('/')[-1].split('_')[0][1])
        label = np.zeros(67)
        label[tmp] = 1  # one-hot label
        label = torch.from_numpy(label).float()
        # if self.transform:
        #     img = self.transform(img)
        return img, label
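(To show what __getitem__ produces, here is the label construction on a hypothetical filename, plus the NaN check I run on the loaded arrays — the path and the dummy array are just stand-ins:)

```python
import numpy as np

# hypothetical path, just to illustrate the parsing: 'c5' -> class index 5
path = 'data/c5_bedroom.jpg'
cls = np.int32(path.split('/')[-1].split('_')[0][1])
label = np.zeros(67)
label[cls] = 1  # one-hot vector with a single 1 at index 5

# the NaN check I run on each image (dummy array in place of mpimg.imread output)
img = np.random.rand(224, 224, 3)
assert not np.isnan(img).any()
print(int(cls), int(label.sum()))  # 5 1
```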
What could be going on?