I’m doing inference with a Faster R-CNN model and have the following setup, where I apply non-maximum suppression (NMS) to the outputs (batch size is always 1):
model_test.eval()
with torch.no_grad():
    for images, targets in data_loader:
        images = list(img.to(device) for img in images)
        outputs = model_test(images)
        outputs = [{k: v.to(cpu_device) for k, v in t.items()} for t in outputs]
        predictions = apply_nms(outputs[0], iou_thresh=0.1)
However, I want to be sure what this line of code is doing and why it is necessary during evaluation:
outputs = [{k: v.to(cpu_device) for k, v in t.items()} for t in outputs]
Can anyone provide a good explanation?
And finally, is it okay to apply non-maximum suppression the way I have defined it here?