Efficiently scattering a high-resolution point cloud of features over a low-resolution voxel grid

Hi,

I have a point cloud of N points, where each point has a feature vector of size C and a 3D location (x, y, z), so I can represent the point cloud with two tensors of shapes N*C and N*3. I am trying to scatter this point cloud over a lower-resolution voxel grid of size V*V*V to obtain a feature volume of shape V*V*V*C. Since the voxel grid is of lower resolution, some voxels will receive multiple points while others might receive none. (I make sure to translate my point cloud coordinates to voxel coordinates first; a rough sketch of that conversion is included after the loop below.)

To compute the feature volume, I first initialize a count volume of shape V*V*V and a feature volume of shape V*V*V*C with zeros. A straightforward way is to run a for loop over all the points: identify which voxel each point belongs to, add the point's feature to that voxel's feature, and increase the count for that voxel by 1. Once done, the summed feature volume is divided by the counts to get the final (averaged) feature volume:

import torch

eps = 1e-8
feat_vol = torch.zeros(V, V, V, C)
count_vol = torch.zeros(V, V, V, 1)

# coord_pc: N*3 long tensor of voxel indices, feat_pc: N*C feature tensor
for i in range(feat_pc.shape[0]):
    x, y, z = coord_pc[i]
    feat_vol[x, y, z] += feat_pc[i]   # accumulate features per voxel
    count_vol[x, y, z] += 1           # count points falling into this voxel

feat_vol /= (count_vol + eps)         # average; eps avoids division by zero for empty voxels
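For context, this is roughly how I map the raw point coordinates to integer voxel indices. It is just a sketch: pc_xyz and the normalization to the cloud's bounding box are placeholders for however the coordinates actually get produced.

# Rough sketch of the world-to-voxel conversion mentioned above.
# pc_xyz is the N*3 float coordinate tensor; the bounding-box normalization
# below is just one possible choice.
pc_min = pc_xyz.min(dim=0).values
pc_max = pc_xyz.max(dim=0).values
coord_pc = ((pc_xyz - pc_min) / (pc_max - pc_min + 1e-8) * (V - 1)).long()
coord_pc = coord_pc.clamp_(0, V - 1)   # keep every point inside the V*V*V grid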

However, this becomes very inefficient as the number of points grows, and I am wondering whether there is a more efficient way to do this.

I can run the following vectorized operations:

feat_vol[coord_pc[:, 0], coord_pc[:, 1], coord_pc[:, 2]] += feat_pc
count_vol[coord_pc[:, 0], coord_pc[:, 1], coord_pc[:, 2]] += 1

The issue here is that when multiple points fall into the same voxel (i.e. coord_pc contains duplicate rows), feat_vol and count_vol are only updated by one of those points instead of accumulating all of them, which is not what I want. Unless there is something similar to a Manager in Python multiprocessing, which can update the same underlying object from multiple inputs, this cannot work. I would appreciate any thoughts or suggestions.
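For reference, here is a tiny self-contained demo of the duplicate-index behaviour described above. The numbers are made up purely for illustration, and the index_put_ call at the end is just something I came across that seems to describe the accumulation behaviour I am after, not necessarily the right or most efficient way to do it.

import torch

vol = torch.zeros(4)                     # toy 1D "volume"
idx = torch.tensor([1, 1, 2])            # two "points" land in the same cell (index 1)
vals = torch.tensor([10.0, 20.0, 5.0])

vol[idx] += vals
print(vol)    # cell 1 receives only one of the two updates, not 10 + 20

vol2 = torch.zeros(4)
vol2.index_put_((idx,), vals, accumulate=True)
print(vol2)   # cell 1 now holds 30.0, i.e. duplicate indices are accumulated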

Thanks