Lift the test image size limit

The code is as follows:
def validate(val_loader, net, criterion, optimizer, curr_epoch, writer):
    """
    Runs the validation loop after each training epoch
    val_loader: data loader for validation
    net: the network
    criterion: loss fn
    optimizer: optimizer
    curr_epoch: current epoch
    writer: tensorboard writer
    return: average validation loss, average mean-F score
    """
    net.eval()
    val_loss = AverageMeter()
    mf_score = AverageMeter()
    IOU_acc = 0
    dump_images = []
    heatmap_images = []
    for vi, data in enumerate(val_loader):
        input, mask, edge, img_names = data
        assert len(input.size()) == 4 and len(mask.size()) == 3
        assert input.size()[2:] == mask.size()[1:]
        h, w = mask.size()[1:]

        batch_pixel_size = input.size(0) * input.size(2) * input.size(3)
        input, mask_cuda, edge_cuda = input.cuda(), mask.cuda(), edge.cuda()

        with torch.no_grad():
            seg_out, edge_out = net(input)    # output = (1, 19, 713, 713)

        if args.joint_edgeseg_loss:
            loss_dict = criterion((seg_out, edge_out), (mask_cuda, edge_cuda))
            val_loss.update(sum(loss_dict.values()).item(), batch_pixel_size)
        else:
            val_loss.update(criterion(seg_out, mask_cuda).item(), batch_pixel_size)

        # Collect data from the different GPUs onto a single GPU since
        # encoding.parallel.criterionparallel calculates distributed loss
        # functions

        seg_predictions = seg_out.data.max(1)[1].cpu()
        edge_predictions = edge_out.max(1)[0].cpu()

        # Logging
        if vi % 20 == 0:
            if args.local_rank == 0:
                logging.info('validating: %d / %d' % (vi + 1, len(val_loader)))
        if vi > 10 and args.test_mode:
            break
        _edge = edge.max(1)[0]

        # Image dumps
        if vi < 10:
            dump_images.append([mask, seg_predictions, img_names])
            heatmap_images.append([_edge, edge_predictions, img_names])

        IOU_acc += fast_hist(seg_predictions.numpy().flatten(), mask.numpy().flatten(),
                             args.dataset_cls.num_classes)

        del seg_out, edge_out, vi, data

    if args.local_rank == 0:
        evaluate_eval(args, net, optimizer, val_loss, mf_score, IOU_acc, dump_images, heatmap_images,
                      writer, curr_epoch, args.dataset_cls)

    return val_loss.avg, mf_score.avg

Large images cannot be tested, and the test images share a folder with the dataset's validation set. Every time I want to test an image after training, I have to crop the large image and provide a corresponding label. How can I improve the test program so that I can feed in an arbitrarily large image and still get a prediction?

If you are running out of memory during evaluation, you could wrap the evaluation code in a with torch.no_grad() block, which avoids storing the intermediate tensors that would otherwise be kept for gradient computation.
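A minimal sketch of that pattern, using a tiny Conv2d as a stand-in for the real segmentation network (the stand-in model and shapes are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Tiny stand-in for the real segmentation network (assumption for illustration).
net = nn.Conv2d(3, 19, kernel_size=3, padding=1)
net.eval()

image = torch.randn(1, 3, 64, 64)

# Inside torch.no_grad() no autograd graph is recorded, so the intermediate
# activations that backward() would need are never stored.
with torch.no_grad():
    out = net(image)

print(out.requires_grad)  # False: no graph was built
```

Combining net.eval() with torch.no_grad() is the usual evaluation setup: eval() switches layer behavior (dropout, batch norm), while no_grad() is what actually saves the memory.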

If that doesn't help, you could lower the batch size (if it's not already 1), use model sharding (if more than a single GPU is available), or move the evaluation to the CPU.
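For the original question about arbitrarily large test images, a common workaround is sliding-window (tiled) inference: split the large image into overlapping crops, forward each crop separately, and average the overlapping logits. Below is a rough sketch under the assumption that the network maps a tile to per-pixel logits of the same spatial size (the tile_size/stride defaults are arbitrary; a net that returns (seg_out, edge_out) would need a small adaptation):

```python
import torch
import torch.nn as nn

def sliding_window_inference(net, image, tile_size=512, stride=256, num_classes=19):
    """Run `net` over overlapping tiles of a (1, C, H, W) image and average
    the overlapping logits. Assumes `net` maps a tile to per-pixel logits of
    the same spatial size."""
    net.eval()
    _, _, h, w = image.shape
    logits = torch.zeros(1, num_classes, h, w)
    counts = torch.zeros(1, 1, h, w)
    ys = list(range(0, max(h - tile_size, 0) + 1, stride))
    xs = list(range(0, max(w - tile_size, 0) + 1, stride))
    # Make sure the bottom and right borders are covered by a final tile.
    if h > tile_size and ys[-1] != h - tile_size:
        ys.append(h - tile_size)
    if w > tile_size and xs[-1] != w - tile_size:
        xs.append(w - tile_size)
    with torch.no_grad():  # no autograd graph during evaluation
        for y in ys:
            for x in xs:
                tile = image[:, :, y:y + tile_size, x:x + tile_size]
                out = net(tile)
                logits[:, :, y:y + tile_size, x:x + tile_size] += out
                counts[:, :, y:y + tile_size, x:x + tile_size] += 1
    return (logits / counts).argmax(1)  # (1, H, W) class indices
```

With this, the full-size image never passes through the network at once, so memory use is bounded by the tile size; if even a single tile does not fit, the same loop can run on the CPU. It also sidesteps the cropping-and-relabeling step, since the ground-truth label is only needed if you want to score the stitched prediction.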