Is it possible to forward a batch of images, say 64 images, through a network and then run the backward pass image by image? Here is my code:
```python
def train(epoch):
    global steps
    global s
    global optimizer
    epochLoss = 0
    for index, (images, labels) in enumerate(trainLoader):
        # Decay the learning rate at the scheduled steps
        if s in steps:
            learning_rate = learning_rate * 0.1
            optimizer = optim.SGD(net.parameters(), lr=learning_rate,
                                  momentum=momentum, weight_decay=decay)
        if cuda:
            images = images.cuda()
        images = V(images)
        optimizer.zero_grad()
        output = net(images).cpu()  # 64 x 95 x 7 x 7
        loss = 0
        for ind in range(images.size()[0]):  # images.size()[0] == 64
            target = V(jsonToTensor(labels[ind]))
            cost = criterion(output[ind, :, :, :].unsqueeze(0), target)
            loss += cost.data[0]
            cost.backward(retain_variables=True)  # <---- Error occurs here!
        epochLoss += loss
        optimizer.step()
        print("(%d,%d) -> Current Batch Loss: %f" % (epoch, index, loss))
        s = s + 1
    losses.append(epochLoss)
```
In the code above, `criterion` is my custom cost function, which takes two tensors as input. When I run this code, I get the following error:
RuntimeError: inconsistent tensor size at /py/conda-bld/pytorch_1490979338030/work/torch/lib/TH/generic/THTensorMath.c:827
Could you please tell me what the problem is and how I can solve it?
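For reference, here is the per-image-backward pattern I am trying to achieve, reduced to a minimal self-contained sketch. The network, criterion, and tensor shapes below are dummy stand-ins (a small `Conv2d` and `MSELoss` instead of my real `net` and custom `criterion`); the key point is forwarding the whole batch once, then calling `backward(retain_graph=True)` per image so the gradients accumulate in `.grad`:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Dummy stand-ins for the real net / criterion from the question
net = nn.Conv2d(3, 5, kernel_size=3, padding=1)
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

images = torch.randn(4, 3, 7, 7)   # batch of 4 images
targets = torch.randn(4, 5, 7, 7)  # one target per image

optimizer.zero_grad()
output = net(images)               # forward the whole batch at once

batch_loss = 0.0
for ind in range(images.size(0)):
    # Per-image cost on a 1-image slice of the batched output
    cost = criterion(output[ind].unsqueeze(0), targets[ind].unsqueeze(0))
    # retain_graph=True: the graph through `net` is shared by all images,
    # so it must survive until the last per-image backward call
    cost.backward(retain_graph=True)
    batch_loss += cost.item()      # bookkeeping only, no graph kept

optimizer.step()
print(batch_loss)
```

Gradients from each per-image backward are summed into `param.grad`, so a single `optimizer.step()` at the end applies the whole batch's update.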