I'm converting a PyTorch tensor representation of an image, plus a series of points, to NumPy so that I can draw lines between the points and display the image in JupyterLab (using Matplotlib).
If I comment out the cv2.polylines line, this code works as expected and shows me the image.
# convert img tensor to np
img = img / 2 + 0.5  # unnormalize
img = img.detach().cpu().numpy().astype(np.float32)
img = np.transpose(img, (1, 2, 0))  # (C, H, W) -> (H, W, C)
print(type(img))
# prints:
# <class 'numpy.ndarray'>
print(img.shape)
# prints:
# (768, 256, 3)
# convert label tensor to numpy ndarray
pts = lbl.detach().cpu().numpy().reshape(-1, 1, 2)
pts = np.rint(pts).astype(np.int32)
print([pts])
# prints:
# [array([[[ 17, 153]],
# [[153, 154]],
# [[159, 692]],
# [[ 14, 691]]], dtype=int32)]
# draw lines between the vertices in lbl
cv2.polylines(img, [pts], True, (0, 1, 1))
# show the image with matplotlib.pyplot
plt.imshow(img)
plt.show()
However, polylines raises an error:
---> 36 cv2.polylines(img, [pts], True, (0,255,255))
37 plt.imshow(img)
38 plt.show()
TypeError: Expected Ptr<cv::UMat> for argument 'img'
How can I draw lines on this image and then display it in JupyterLab?
Python 3.7, OpenCV 4.4 (same behaviour in 4.2 and 3.4)
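A minimal NumPy-only sketch of what I suspect is going on (assuming the problem is memory layout rather than dtype): np.transpose returns a non-contiguous view of the array, and OpenCV drawing functions seem to require C-contiguous memory, which would produce exactly this kind of TypeError.

```python
import numpy as np

# Simulate the (C, H, W) tensor layout, then transpose to (H, W, C),
# exactly as in the conversion code above (no torch needed for this check).
chw = np.zeros((3, 768, 256), dtype=np.float32)
hwc = np.transpose(chw, (1, 2, 0))

# The transposed array is a view with non-contiguous memory.
print(chw.flags['C_CONTIGUOUS'])  # True
print(hwc.flags['C_CONTIGUOUS'])  # False

# np.ascontiguousarray() copies the data into contiguous memory,
# producing an array that cv2 drawing calls should accept.
fixed = np.ascontiguousarray(hwc)
print(fixed.flags['C_CONTIGUOUS'])  # True
```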
Additional:
If I create a new blank NumPy ndarray of matching size and dtype, I can draw on it and display it; the one I get from the tensor I can only display, not draw on.
blank_img = np.zeros((768, 256, 3), np.float32)
print(type(blank_img))
print(blank_img.shape)
print(blank_img[:5, :5, 0])
cv2.polylines(blank_img, [pts], True, (0, 1, 1))  # works fine here
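Assuming memory layout is indeed the difference, the contrast above can be reproduced without torch or cv2: a freshly created array owns contiguous memory, while a transposed view does not.

```python
import numpy as np

# A freshly created array (like blank_img above) is C-contiguous.
blank_img = np.zeros((768, 256, 3), np.float32)
print(blank_img.flags['C_CONTIGUOUS'])  # True

# A transposed view (like the tensor-derived image) is not, which
# would explain why only the blank array accepts drawing calls.
view = np.zeros((3, 768, 256), np.float32).transpose(1, 2, 0)
print(view.flags['C_CONTIGUOUS'])  # False

# .copy() (like np.ascontiguousarray) yields a contiguous array again.
print(view.copy().flags['C_CONTIGUOUS'])  # True
```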