Shared memory for Mesh object: PyTorch3D

I want to share a Meshes object across processes so each can sample points from it. Is the approach below correct?

import torch.multiprocessing as mp
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.structures import Meshes  # needed for the type hint below

def sample(mesh_obj, n_points, device):
  # NOTE: Process discards the target's return value, so these points are lost;
  # an mp.Queue would be needed to collect them in the parent.
  return sample_points_from_meshes(mesh_obj, n_points).to(device)

def creator_parallel(mesh_obj: Meshes, n_points: int, length: int, device):
  num_of_processes = length
  mesh_obj.share_memory()  # main requirement
  processes = []
  for rank in range(num_of_processes):
    p = mp.Process(target=sample, args=(mesh_obj, n_points, device))
    p.start()
    processes.append(p)
  for p in processes:
    p.join()

creator_parallel(trg_mesh, 1000, 50, device)

Are you running out of GPU memory in Google Colab?

Yes, @albertotono, that's the case, and I also want to explore multiprocessing approaches to this.

OK. In your task you are trying to use multiprocessing to transform PyTorch3D meshes into point clouds.

In the Hogwild implementation, multiprocessing has mainly been used for training the model, as also reported here.
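
For reference, a minimal Hogwild-style sketch of that pattern (the model, data, and hyperparameters here are placeholders, not from this thread):

import torch
import torch.nn as nn
import torch.multiprocessing as mp

def train(model):
  # Each worker runs its own optimizer; updates land on the shared parameters.
  optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
  for _ in range(100):
    x = torch.randn(8, 10)  # placeholder batch
    y = torch.randn(8, 1)
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

if __name__ == "__main__":
  model = nn.Linear(10, 1)
  model.share_memory()  # move parameters to shared memory before forking
  processes = []
  for rank in range(4):
    p = mp.Process(target=train, args=(model,))
    p.start()
    processes.append(p)
  for p in processes:
    p.join()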

When a Tensor is sent to another process, the Tensor data is shared. If torch.Tensor.grad is not None, it is shared as well.
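
As a minimal illustration of that behavior (the tensor and worker here are just for demonstration):

import torch
import torch.multiprocessing as mp

def worker(t):
  t.mul_(2)  # in-place update on the shared storage

if __name__ == "__main__":
  t = torch.ones(3)
  t.share_memory_()  # move the tensor's storage into shared memory
  p = mp.Process(target=worker, args=(t,))
  p.start()
  p.join()
  print(t)  # prints tensor([2., 2., 2.]): the parent sees the child's change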

But those implementations are focused on models and the training procedure inside the model, so I would implement it directly there.
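
For the mesh-to-points task specifically, here is a sketch of one possible approach: since a Meshes object is not a single Tensor, share its underlying verts/faces tensors and rebuild the Meshes inside each worker. This assumes a single mesh (the thread's trg_mesh); rebuild_and_sample and the Queue-based result collection are my additions, not from the original code:

import torch.multiprocessing as mp
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes

def rebuild_and_sample(verts, faces, n_points, queue):
  # Reconstruct a Meshes object from the shared tensors inside the worker.
  mesh = Meshes(verts=[verts], faces=[faces])
  queue.put(sample_points_from_meshes(mesh, n_points))

if __name__ == "__main__":
  verts = trg_mesh.verts_packed().share_memory_()
  faces = trg_mesh.faces_packed().share_memory_()
  queue = mp.Queue()  # carries the sampled point clouds back to the parent
  processes = []
  for rank in range(4):
    p = mp.Process(target=rebuild_and_sample, args=(verts, faces, 1000, queue))
    p.start()
    processes.append(p)
  clouds = [queue.get() for _ in processes]  # drain before join to avoid deadlock
  for p in processes:
    p.join()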

Also, I hope this helps.