Pooling in Loss Function: How to Correct Bug

Hello PyTorch community,
Loss function question:

I have a loss function that is meant to compare (via MSE, etc.) an original image against a modified image at several resolutions. I implemented the multi-resolution comparison using max/avg pooling layers with stride > 1.

Pseudocode:

import torch.nn.functional as F

def loss(image, target_image):
    # downsample both images the same way, e.g. 2x2 average pooling with stride 2
    reduced_image = F.avg_pool2d(image, kernel_size=2, stride=2)
    reduced_target = F.avg_pool2d(target_image, kernel_size=2, stride=2)

    # compare at full resolution and at the reduced resolution
    total = F.mse_loss(image, target_image)
    total = total + F.mse_loss(reduced_image, reduced_target)
    return total
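For reference, here is a minimal call that exercises it on dummy NCHW tensors (the shapes and the choice of avg_pool2d above are just my assumptions, not the real data):

import torch

image = torch.rand(8, 1, 64, 64, requires_grad=True)   # hypothetical batch of images
target_image = torch.rand(8, 1, 64, 64)

total = loss(image, target_image)
total.backward()   # pooling is differentiable, so gradients flow back to `image`
print(total.item())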

However, I am getting an error:

“…/python3.6/site-packages/torch/tensor.py”, line 376, in __array
return self.cpu().numpy()
RuntimeError: Can’t call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
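(For context, the detach() the message suggests is only for cases where you actually want a NumPy copy of a tensor that is still attached to the autograd graph; loss_tensor here is just a placeholder name:)

# only if a NumPy copy is genuinely wanted, outside the graph
loss_np = loss_tensor.detach().cpu().numpy()   # or loss_tensor.item() for a 0-dim tensor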

Can someone suggest 1) what the source of the error is, 2) whether my approach is reasonable, and 3) what I would need to change to get it working?

Issue resolved. The loss function and pooling were a red herring; the actual problem was multiplying the loss object by a float.
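In case it helps anyone else, here is a minimal sketch of the kind of multiplication that can produce this traceback. My guess (not confirmed) is that the float was really a NumPy scalar, since that is what routes the multiplication through NumPy and into tensor.__array__():

import numpy as np
import torch

loss = (torch.rand(4, requires_grad=True) ** 2).mean()

weight = np.float32(0.5)       # e.g. a weight pulled out of a NumPy array
# Depending on the PyTorch version, `weight * loss` (NumPy scalar on the left)
# can be dispatched to NumPy, which calls loss.__array__() and raises the
# "Can't call numpy() on Variable that requires grad" error shown above.

scaled = float(weight) * loss  # a plain Python float keeps the op inside autograd
scaled.backward()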