I’m currently working on a project involving point clouds created with a LiDAR scanner. I know that occupancy grids are popular and straightforward to work with, but they require an extremely large input, since every possible voxel has to be marked as either filled or empty. I could also use a depth map, but since that is a flat image whose grayscale value represents distance, it isn’t true 3D data, more like 2.5D. First, would it be useful to convert a 2D input layer that takes a depth map into a 3D convolution, and would this help solve my issue of not having true 3D data? Second, how would this be implemented? Could someone please link an example of how to upscale a 2D convolution to a 3D convolution? Third, is there any other efficient method for converting a 3D point cloud into a true 3D format that can be fed into a 3D convolutional neural network?
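To make the occupancy-grid size problem concrete, here is a minimal NumPy sketch of how I imagine voxelizing a point cloud into a binary grid (the grid size of 64 is an arbitrary choice for illustration):

```python
import numpy as np

def voxelize(points, grid_size=64):
    """Convert an (N, 3) point cloud to a binary occupancy grid.

    Points are normalized into [0, 1] per axis, then binned into
    grid_size^3 voxels; a voxel is 1 if any point falls inside it.
    """
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    # Guard against degenerate (flat) axes to avoid division by zero.
    spans = np.where(maxs > mins, maxs - mins, 1.0)
    normalized = (points - mins) / spans
    idx = np.minimum((normalized * grid_size).astype(int), grid_size - 1)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Even a modest 64^3 grid already has 262,144 cells, which is the
# input-size problem I mean.
cloud = np.random.rand(1000, 3)
grid = voxelize(cloud)
print(grid.shape, grid.sum())  # (64, 64, 64), at most 1000 occupied voxels
```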
I have also been working on (synthetic) point clouds recently.
There is this library that works with PyTorch.
Thank you for the reply. So are you saying the repo you linked makes working with depth maps and occupancy grids easier, or are you recommending that I use a technique like graph CNNs for my project?
I’m not sure about the specifics of your project, but from what you described, I think what you want is to un-project the depth map (using whatever camera parameters you have) into a 3D point cloud. If that is the case, I think you would want graph CNNs / PointNet / PointCNN.
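As a rough sketch of the un-projection I have in mind, assuming a simple pinhole camera model (the intrinsics `fx`, `fy`, `cx`, `cy` below are made-up placeholder values, not something from your scanner):

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Lift an (H, W) depth map to an (M, 3) point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Hypothetical intrinsics for a 640x480 sensor.
depth = np.random.uniform(0.5, 5.0, size=(480, 640))
cloud = unproject_depth(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3) when every pixel has valid depth
```

The resulting (M, 3) array is exactly the kind of unordered point list that PointNet-style networks consume directly, without any voxelization step.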
So basically, I’m trying to upscale point clouds. Could I use a graph CNN to convert a point-cloud list into a 3D CNN layer, and then use a standard image-upscaling network, but with 3D convolutions instead of 2D ones? Also, I know that graph CNNs are good for non-Euclidean data, like social networks, but why would graph CNNs be useful for converting a point-cloud list into a 3D CNN input, given that point clouds are Euclidean data?
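To check my own understanding of the graph-CNN route: as I understand it, the graph has to be built from the points themselves, typically by connecting each point to its k nearest neighbours, and the graph convolution then operates over those edges rather than over a voxel grid. A minimal NumPy sketch of that graph-construction step (k = 8 is an arbitrary choice):

```python
import numpy as np

def knn_edges(points, k=8):
    """Build a (2, N*k) edge index connecting each point to its
    k nearest neighbours, in the COO-style format graph libraries expect."""
    # Pairwise squared distances via |a - b|^2 = |a|^2 + |b|^2 - 2 a.b
    sq = (points ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
    np.fill_diagonal(d2, np.inf)  # exclude self-loops
    neighbours = np.argsort(d2, axis=1)[:, :k]   # (N, k) neighbour indices
    src = np.repeat(np.arange(len(points)), k)
    dst = neighbours.reshape(-1)
    return np.stack([src, dst])                  # (2, N*k)

points = np.random.rand(100, 3)
edges = knn_edges(points, k=8)
print(edges.shape)  # (2, 800)
```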