I’m running PyTorch code to train a model. I’m not loading any especially big tensors; it’s just a VGG16 and an LSTM over a 1x224x224x3 tensor, and at the third iteration the error is raised. The code is:
for nb in range(size_testing_set):
    vx = Variable(x_test.narrow(0, nb, 1).contiguous(),
                  requires_grad=True)
    vryoh = Variable(yoh_test.narrow(0, nb, 1).contiguous(),
                     requires_grad=False)
    vrx = self.rm_model(vx)
    prediction = self.pm_model(vrxtrain, vytrain, weights, vrx)
    loss = self.criterion(prediction, vryoh)
    if agg_loss is None:
        agg_loss = loss
    else:
        agg_loss += loss
    le_max = prediction.data.max(1)
    predit = le_max[1]
    if predit[0] == problem.yoh_test[nb].data.max(0)[1]:
        nb_ok = nb_ok + 1
rm_model is the VGG and pm_model is the LSTM. Am I missing something?
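One thing I’ve been wondering about is the loss accumulation: since agg_loss is a sum of loss Variables, each iteration’s computation graph may be kept alive, so memory could grow until the third iteration. A minimal sketch of accumulating the loss as a plain Python float instead, using a stand-in linear model rather than my actual VGG/LSTM (on newer PyTorch this is loss.item(); on the old Variable API it would be loss.data[0]):

```python
import torch
import torch.nn as nn

# Stand-in model and criterion, just to illustrate the accumulation pattern.
model = nn.Linear(8, 2)
criterion = nn.CrossEntropyLoss()

agg_loss = 0.0
for nb in range(3):
    x = torch.randn(1, 8)          # dummy input
    y = torch.tensor([0])          # dummy target class
    loss = criterion(model(x), y)
    # Convert to a Python float so the graph from this iteration
    # can be freed instead of being retained inside agg_loss.
    agg_loss += loss.item()

print(type(agg_loss).__name__)
```

Is that the kind of pattern that would explain memory growing across iterations here?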
Important: should I open an issue for this? I’m not used to bug reporting.
Thank you in advance!