Help with 3D point cloud data

I am trying to implement the DeepVCP paper, and for that I need to load the KITTI dataset. I have downloaded the KITTI odometry data, but I am stuck on how to proceed from there. This is my first time working with 3D data, so I am very confused.

In the extracted folder there is a calib.txt file, which has to be used for some calibration step. Then there are the image_2 and image_3 folders, which contain the left and right camera images. Finally, there is the velodyne folder containing the .bin files for the point clouds.
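
From the KITTI devkit readme, my understanding is that each .bin file is just a flat float32 array with four values per point (x, y, z, reflectance), and that calib.txt holds one 3x4 matrix per line (P0 to P3 for the cameras and Tr for the Velodyne-to-camera transform). This is a rough, untested sketch of the loading as I understand it (the sequence and frame paths are just placeholders):

```python
import numpy as np

# One frame of one sequence; both the sequence and frame are placeholders.
scan = np.fromfile("dataset/sequences/00/velodyne/000000.bin", dtype=np.float32)
points = scan.reshape(-1, 4)   # (N, 4): x, y, z, reflectance
xyz = points[:, :3]            # coordinates in the Velodyne frame

# calib.txt: one 3x4 matrix per line, keyed by P0..P3 (camera projection
# matrices) and Tr (Velodyne -> camera 0 transform).
calib = {}
with open("dataset/sequences/00/calib.txt") as f:
    for line in f:
        if not line.strip():
            continue
        key, *values = line.split()
        calib[key.rstrip(":")] = np.array(values, dtype=np.float32).reshape(3, 4)
```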

Question: Do I need to use both the left and right images, or do I have to combine them into a single image?

I checked the pykitti repo, but I was not able to make much progress. What is confusing me right now is how to use the calibration file and what to do with the left and right images.
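
For context, the basic usage I found in the pykitti README is roughly this (the base path and sequence number are placeholders), but I still do not know what to actually do with the calibration object it returns:

```python
import pykitti

# Base path is the folder that contains the "sequences" directory;
# "00" is one of the odometry sequences (both are placeholders).
data = pykitti.odometry("dataset", "00")

velo = data.get_velo(0)    # (N, 4) numpy array: x, y, z, reflectance
left = data.get_cam2(0)    # left color image
right = data.get_cam3(0)   # right color image
print(data.calib)          # calibration parsed from calib.txt
```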

Has anyone else worked with this dataset? Can you share some code samples on how to create a dataloader for such a task?


Hi Kushaj, I am also interested in using KITTI point clouds in PyTorch. Have you made any progress on this?
By the way, for DeepVCP you need the Velodyne point clouds, not the images.

Best regards
Arash

Hello,

I also have the same question, as this is my first time working with 3D data. I would like to build a DataLoader for the KITTI velodyne folder to be used for DeepVCP, which has PointNet++ as the first layer of the model. Any help would be appreciated.
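
To make it concrete, something like this is what I have in mind for the Dataset, just a rough sketch (the sequence path is a placeholder, and I randomly subsample each scan to a fixed number of points so that scans can be stacked into a batch for PointNet++):

```python
import glob
import os

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class KittiVelodyneDataset(Dataset):
    """Loads raw Velodyne scans from one KITTI odometry sequence."""

    def __init__(self, sequence_dir, num_points=16384):
        # sequence_dir is a placeholder, e.g. "dataset/sequences/00"
        self.files = sorted(glob.glob(os.path.join(sequence_dir, "velodyne", "*.bin")))
        self.num_points = num_points

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        scan = np.fromfile(self.files[idx], dtype=np.float32).reshape(-1, 4)
        xyz = scan[:, :3]
        # Randomly subsample to a fixed size so scans can be batched.
        choice = np.random.choice(len(xyz), self.num_points, replace=len(xyz) < self.num_points)
        return torch.from_numpy(xyz[choice])   # (num_points, 3)


loader = DataLoader(KittiVelodyneDataset("dataset/sequences/00"), batch_size=4, shuffle=True)
```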

Thank you!

@Kushaj, @sepehrfard
You can take a look at pointseg or DeepLIO; in both projects the 3D LiDAR scans are projected to images using a spherical projection.
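
The projection itself is only a few lines. Roughly something like this (a simplified sketch of the SqueezeSeg-style range image used in those projects; the image size and the vertical field of view below are typical HDL-64E values, so treat them as assumptions):

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 4) Velodyne scan onto an H x W range image."""
    x, y, z, remission = points.T
    r = np.linalg.norm(points[:, :3], axis=1)

    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))  # elevation

    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    # Normalize the angles to [0, 1] and scale to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W
    v = (1.0 - (pitch - fov_down_rad) / fov) * H

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # Each pixel stores x, y, z, remission and range of the point that lands there.
    image = np.zeros((H, W, 5), dtype=np.float32)
    image[v, u] = np.stack([x, y, z, remission, r], axis=1)
    return image
```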
