Hello everyone!
I’ve recently started reading the paper “CenterNet: Objects as Points”. The general idea is to train a keypoint estimator using heat-maps and then extend the detected keypoints to other tasks such as object detection, human-pose estimation, etc. The thing that confuses me is how to splat a ground-truth keypoint onto a heat-map using a Gaussian kernel.
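For context, by “splatting” I mean writing a Gaussian bump centred at the keypoint into the heat-map, merging overlapping objects with an elementwise max. A minimal sketch of that operation (the helper name `splat_keypoint` and its signature are my own, not from the paper’s code):

```python
import numpy as np

def splat_keypoint(heatmap, cx, cy, radius, sigma):
    # hypothetical helper: write an unnormalized Gaussian bump centred at
    # pixel (cx, cy) into `heatmap`, clipping at the image borders and
    # merging with any existing bumps via an elementwise max
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    g = np.exp(-(x * x + y * y) / (2 * sigma * sigma))

    H, W = heatmap.shape
    t, b = max(cy - radius, 0), min(cy + radius + 1, H)
    l, r = max(cx - radius, 0), min(cx + radius + 1, W)
    heatmap[t:b, l:r] = np.maximum(
        heatmap[t:b, l:r],
        g[t - (cy - radius):b - (cy - radius),
          l - (cx - radius):r - (cx - radius)],
    )
    return heatmap
```

The open question for me is how the radius (and hence sigma) of that bump is chosen per object.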
So the fact that they cite the reference without commenting that they do something different would seem to indicate that they do something very similar, i.e. I would expect them to see how far you can move the bounding box (keeping the size the same; if they only consider the center, I guess) while keeping IoU >= 0.7. Without more thought I wouldn’t know whether an axial or a diagonal translation is the critical case, but you could experiment with that. That distance would be the radius. Then you could take sigma as 1/3 of it, like in the reference, or something similar.
Whether that is exactly what they do, I don’t know. However, I would expect to get radii that look somewhat similar to the ones pictured.
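A rough numerical sketch of what I mean (the function name and the choice of a diagonal shift are my own assumptions, not taken from their code): search for the largest shift `r` such that the box translated by `(r, r)` still overlaps the original with IoU >= 0.7, then take a third of it as sigma.

```python
def max_shift_radius(w, h, min_iou=0.7):
    """Largest integer r such that a w-by-h box shifted diagonally by
    (r, r) still overlaps the original box with IoU >= min_iou."""
    r = 0
    while True:
        # overlap after shifting the box by (r + 1, r + 1)
        inter = max(w - (r + 1), 0) * max(h - (r + 1), 0)
        union = 2 * w * h - inter
        if inter / union < min_iou:
            return r
        r += 1

radius = max_shift_radius(100, 100)  # -> 9 for a 100x100 box
sigma = radius / 3                   # 1/3 of the radius, as in the reference
```

An axial shift (only along x or y) would give a somewhat larger radius for the same IoU threshold, which is the kind of thing you could experiment with.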
Thanks for your quick reply!
Thomas, I checked the CornerNet code and it uses a function called `gaussian2D`, which computes an unnormalized 2D Gaussian. I googled it but couldn’t find anything. Do you know how that formula is derived?
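As far as I can tell there isn’t much to derive: the standard isotropic bivariate Gaussian density is `1 / (2*pi*sigma**2) * exp(-(x**2 + y**2) / (2*sigma**2))`, and “unnormalized” just means the `1 / (2*pi*sigma**2)` factor is dropped, so the value at the centre is exactly `exp(0) = 1` — which is what you want, since the heat-map target should be 1 at the keypoint itself. A sketch of that formula (the function name here is mine, not the one in the CornerNet repo):

```python
import numpy as np

def unnormalized_gaussian2d(radius, sigma):
    # exp(-(x^2 + y^2) / (2*sigma^2)) on a (2r+1) x (2r+1) grid;
    # without the 1 / (2*pi*sigma^2) normalizer the peak is exactly 1
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return np.exp(-(x * x + y * y) / (2 * sigma * sigma))

g = unnormalized_gaussian2d(5, 2.0)
print(g[5, 5])  # peak at the keypoint is 1.0
```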