# How to use NumPy and OpenCV without breaking the computation graph?

I am working on a custom loss function. As the output of the network I get the x and y coordinates of points in the image. I am trying to enforce the constraint that the predicted coordinates and the ground truth should be close to each other in the 2D domain. But generating an image from coordinates requires me to work in NumPy and OpenCV, and this breaks the computation graph. Can somebody help me get through this?

vis_loss1 is my main function, which returns the loss, and visualize is the function that converts coordinates to an image (this is where I use NumPy and OpenCV). Using an L1 loss I want to adjust gt and pred, but this is not happening. Can someone help me?

```python
gt_coord = torch.empty(34, requires_grad=False)
gt_coord = gt[i, :, j]
```

Here you first create a tensor that doesn't require gradients, and then you overwrite it with a tensor that does require gradients. I think just writing the following should work:

```python
gt_coord = gt[i, :, j].detach().numpy()
```

Thank you for your reply. I will try it and come back in case of an issue.

I did as you suggested, but the network is still not learning; the loss stays constant. I am trying to make pred as close as possible to gt by constraining them in 2D, and for that I need NumPy and OpenCV, which is why I need a custom loss function. But my loss is not decreasing; it is constant throughout training.

Looking at this a bit more, your code seems problematic. You can't just detach tensors and then reattach them to the computational graph; sorry that I didn't read carefully before. Can you explain why you don't just use (gt_coord - pred_coord)**2?
Is it because you somehow don't care about the ordering of the coordinates, just that some of them overlap?
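To illustrate the point about detaching: a minimal sketch (with made-up values) showing that once a tensor is detached and converted to NumPy, wrapping the result in a new tensor creates an unrelated leaf, so no gradient ever reaches the original prediction.

```python
import torch

# A prediction tensor that should receive gradients.
pred = torch.tensor([2.0, 3.0], requires_grad=True)

detached = pred.detach().numpy()                         # NumPy copy, outside the graph
reattached = torch.tensor(detached, requires_grad=True)  # a NEW leaf, unrelated to pred

loss = (reattached ** 2).sum()
loss.backward()

print(pred.grad)        # None: the original tensor got no gradient
print(reattached.grad)  # gradients flow only to the new leaf
```

This is exactly why the loss stays constant: the optimizer updates `pred`, but the loss was computed on a tensor that has no connection to it.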

cv2.circle is not differentiable in PyTorch, so I think it's not possible to use it directly; however, there might be an easy way to implement something similar in PyTorch instead.
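One way to sketch such a replacement (the function name, grid size, and sharpness parameter here are my own assumptions, not from the thread): instead of drawing a hard circle with cv2.circle, render a soft disk with a sigmoid edge using pure tensor ops, so the rasterized image stays differentiable with respect to the coordinates.

```python
import torch

def soft_circle(center, size=64, radius=5.0, sharpness=1.0):
    """Differentiable stand-in for cv2.circle: a soft disk on a size x size grid.

    center: tensor of shape (2,) holding (x, y); may require grad.
    The sigmoid edge keeps gradients flowing back to `center`,
    unlike cv2.circle, which writes into a plain NumPy array.
    """
    ys = torch.arange(size, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(size, dtype=torch.float32).view(1, -1)
    dist = torch.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2 + 1e-8)
    return torch.sigmoid(sharpness * (radius - dist))  # ~1 inside the disk, ~0 outside

# Usage: compare rendered prediction and ground truth in image space.
pred = torch.tensor([20.0, 30.0], requires_grad=True)
gt = torch.tensor([25.0, 28.0])

loss = ((soft_circle(pred) - soft_circle(gt)) ** 2).mean()
loss.backward()
print(pred.grad)  # non-zero: gradients reach the predicted coordinates
```

The key design choice is replacing the hard in/out test of a rasterized circle with a smooth sigmoid of the distance field, which makes the loss differentiable everywhere.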

I want to constrain them in two dimensions, making the predicted image as close as possible to the ground-truth image. That's why I am not using a simple MSE loss; I want to do it in 2D.

But x, y are already 2D coordinates, so why not just use the coordinate-wise distance, i.e. (x_gt - x_pred, y_gt - y_pred)**2, as the loss function? I understand that you want the circles to overlap, but this loss function would (from what I can tell) minimize what you want.
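The coordinate-wise loss suggested above can be sketched as follows (the keypoint count of 17 is a hypothetical shape, not from the thread); it stays entirely in PyTorch, so the graph is never broken.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: pred and gt each hold 17 (x, y) keypoints.
pred = torch.randn(17, 2, requires_grad=True)
gt = torch.randn(17, 2)

# Mean of (x_gt - x_pred)**2 and (y_gt - y_pred)**2 over all keypoints.
loss = F.mse_loss(pred, gt)
loss.backward()
# pred.grad is now populated, so the optimizer can move pred toward gt.
```

With mean reduction the gradient is simply 2 * (pred - gt) / 34, pulling each predicted coordinate directly toward its ground-truth counterpart.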

Ohh, now I realize that even if I project them in 2D, most of the values will still be zero, so it doesn't matter whether I do it in 2D or 1D.