It seems that most autonomous-vehicle benchmarks for LiDAR segmentation include the vegetation class. Examples:
- SemanticKITTI: SemanticKITTI - A Dataset for LiDAR-based Semantic Scene Understanding
- NuScenes: https://www.nuscenes.org/nuscenes?externalData=all&mapData=all&modalities=Any
How is it possible for point cloud data to discriminate vegetation? It seems to me that this task is only viable for camera RGB segmentation.
For example, if vegetation has overgrown onto part of the road, how can the LiDAR tell the difference between debris and vegetation? I am also curious how LiDAR can differentiate between terrain and sidewalk (and is this distinction even important?).
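To make my confusion concrete, here is a toy sketch of the kind of purely geometric cue I imagine such a model could exploit (entirely my assumption; the `scatter_feature` function and its thresholds are made up by me, not taken from any benchmark): foliage returns tend to be volumetrically scattered, while pavement returns are locally planar, and the eigenvalues of a neighborhood's covariance matrix capture that.

```python
import numpy as np

def scatter_feature(points: np.ndarray) -> float:
    """Ratio lambda_3 / lambda_1 of the local covariance eigenvalues.

    Near 0 -> locally planar neighborhood (road/sidewalk-like surface)
    Near 1 -> isotropically scattered neighborhood (foliage-like volume)
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lambda_1 >= lambda_2 >= lambda_3
    return eigvals[2] / eigvals[0]

rng = np.random.default_rng(0)
# Flat patch: spread in x/y, almost no z variation (road-like)
flat = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
# Volumetric blob: spread along all three axes (foliage-like)
blob = rng.normal(0, 0.5, (200, 3))

print(scatter_feature(flat))  # close to 0
print(scatter_feature(blob))  # much closer to 1
```

Is this roughly the kind of local-geometry signal these segmentation networks learn, or do they rely on something else entirely (e.g. return intensity, multi-echo data)?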
What is the core purpose of LiDAR segmentation for AVs? Besides drivable space, all other detections it makes (e.g. cars and humans) seem to require post-processing into 3D boxes to be of any use downstream (might as well do 3D object detection directly). Are any developers in the AV space deploying LiDAR segmentation models for particular use cases?