Is it possible to create a 3D gaussian distribution at a single point?

Hi,

I’m struggling with a dataset here, as I only have the original 3D images of size 512*512*30. The dataset represents a series of bright spots in 3D space, which are my regions of interest. The spots vary in size across the z-planes.

I want to train a network to segment these spots. But to do that I need a label or target volume for each image or input.

As for labels, I only have a csv file containing the x,y,z coordinates of each bright spot. These points can’t be passed directly to the network as labels.

So my question is: how can I create a label volume for such an input? Would it be possible to create a 3D Gaussian (bell curve) centred at each point? It would be drawn into a zero array, making it a valid label array: the area of interest would be labelled close to 1 while the rest stays at zero.
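For what it’s worth, here is a minimal sketch of that idea in plain NumPy (the volume size, centres, and sigma below are made-up illustration values, not from the actual data): place an isotropic 3D Gaussian around each csv point in a zero array.

```python
import numpy as np

# Made-up sizes/centres for illustration; in practice read them from the csv.
shape = (64, 64, 16)
centres = [(10, 20, 5), (40, 30, 8)]
sigma = 2.0  # controls the spot "radius" in voxels

xs, ys, zs = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                         np.arange(shape[2]), indexing="ij")
label = np.zeros(shape, dtype=np.float32)
for cx, cy, cz in centres:
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2 + (zs - cz) ** 2
    # taking the max keeps each peak at exactly 1 where blobs overlap
    label = np.maximum(label, np.exp(-d2 / (2 * sigma ** 2)))
```

Each centre then sits at a value of exactly 1 and the background decays towards 0, which is the label structure described above.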

Would this work? If so, does anyone know of a Python library that can do this?

Many thanks

I’ve found this in the docs: https://pytorch.org/docs/stable/distributions.html#torch.distributions.multivariate_normal.MultivariateNormal

Will this help me achieve what I want? And how would I tell it the location at which to draw the Gaussian distribution?
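If it helps, my understanding (an assumption on my part, not tested against the actual data) is that the `loc` argument is where you give it the location: you build a coordinate grid and evaluate `log_prob` of a `MultivariateNormal` centred at the csv point, e.g.:

```python
import torch
from torch.distributions.multivariate_normal import MultivariateNormal

# Made-up centre and volume size for illustration.
loc = torch.tensor([16.0, 16.0, 8.0])   # x, y, z centre from the csv
cov = 2.0 * torch.eye(3)                # isotropic covariance, sigma^2 = 2
dist = MultivariateNormal(loc, covariance_matrix=cov)

# Coordinate grid of shape (32, 32, 16, 3).
xs, ys, zs = torch.meshgrid(torch.arange(32.0), torch.arange(32.0),
                            torch.arange(16.0), indexing="ij")
grid = torch.stack([xs, ys, zs], dim=-1)

# log_prob broadcasts over the grid dims; exponentiate to get the density,
# then rescale so the centre voxel is labelled 1.
density = dist.log_prob(grid).exp()
label = density / density.max()
```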

I’m not sure I understand what you’re after well enough to tell which is the best approach.
So you have a 512*512*30 array and labels of x,y,z which are the centres of the objects of interest? You don’t have label information about the “radius”/variance but want to extract that from the image?

I’m asking because, to make an image with Gaussians of the same variance around given x,y,z points, you could just use scatter + convolution, but that doesn’t appear to be what you are after. How many points do you have: 1, 10, 100? What is the range? Are all brightnesses the same? In other words, if we consider a superposition of Gaussians, would they all have the same coefficient? Is it homogeneous in the coordinates (i.e. circles rather than ellipses)?
To me it sounds like the first order of business might be estimating the covariance matrices and potentially the weights. You could try to do that by creating a coordinate grid, superimposing the Gaussians, and minimising some error function, but I’m not sure it would work.
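To illustrate the coordinate-grid idea, here is a toy 2D sketch with made-up numbers (just the mechanics, not a tested recipe): render the superposition at known centres as a function of sigma and fit sigma by gradient descent on the reconstruction error.

```python
import torch

size = 32
xs, ys = torch.meshgrid(torch.arange(float(size)), torch.arange(float(size)),
                        indexing="ij")
centres = torch.tensor([[8.0, 8.0], [20.0, 24.0]])  # made-up centres

def render(sigma):
    # superimpose isotropic Gaussians of width sigma at the known centres
    d2 = ((xs[None] - centres[:, 0, None, None]) ** 2
          + (ys[None] - centres[:, 1, None, None]) ** 2)
    return torch.exp(-d2 / (2 * sigma ** 2)).sum(0)

target = render(torch.tensor(3.0))          # stand-in for the real image
sigma = torch.tensor(1.5, requires_grad=True)
opt = torch.optim.Adam([sigma], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = ((render(sigma) - target) ** 2).mean()
    loss.backward()
    opt.step()
# sigma should now have moved towards the true value of 3.0
```

The same pattern extends to a full covariance matrix, but whether the optimisation stays well behaved with 50 real blobs is exactly the part I’m unsure about.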

Best regards

Thomas


Hi @tom
Many thanks for replying.

So you have a 512*512*30 array and labels of x,y,z which are the centres of the objects of interest? You don’t have label information about the “radius”/variance but want to extract that from the image?

Yes, that is correct. I have a 3D image with height and width 512 and depth 30. The bright objects of interest are all of the same size. All I have is their x,y,z locations in a csv file. I can’t use points as labels for my model, so I figured one way to solve this would be to create a Gaussian at each x,y,z point and then take torch.max() between a zero tensor and my Gaussian tensor. This would give me a label or target tensor.

How many points do you have: 1, 10, 100? What is the range? Are all brightnesses the same? In other words, if we consider a superposition of Gaussians, would they all have the same coefficient? Is it homogeneous in the coordinates (i.e. circles rather than ellipses)?

Per volume I have about 50 points in the csv file. I am cropping a 32*32*30 block around these points in the dataloader. Yes, they all have the same brightness and they all have the same shape (circles).

Given that they all have the same variance, maybe scatter + convolution would work, as you mentioned, but I’m not familiar with how to achieve this.

Many Thanks

Sorry for the delay.
So I used indexing because I can never figure out scatter for more than one dimension, but this is what I had in mind:

import torch
import math

size = 100
coords = torch.randint(0, size, (50, 3))
coords[:, 2] %= 30  # lazy trick, won't be uniformly distributed...

t = torch.zeros(size, size, 30)
t[coords[:, 0], coords[:, 1], coords[:, 2]] = 1
# now we have unit impulses at the centres
# let's apply a Gaussian with sigma = 1
# you want the kernel size to be an odd number; widen it if you need a larger sigma
sqcoords = torch.arange(-4, 5, device=t.device, dtype=t.dtype) ** 2
# note the / 2 in the exponent to match the 1 / (2 pi) normalisation
weights = (torch.exp(-(sqcoords[None] + sqcoords[:, None]) / 2)
           / (2 * math.pi))[None, None]
# treat the 30 z-slices as a batch and blur each one with a 2D convolution
t_gauss = torch.nn.functional.conv2d(
    t.permute(2, 0, 1).unsqueeze(1), weights,
    padding=weights.size(-1) // 2,
).squeeze(1).permute(1, 2, 0)
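One note on the snippet above, offered as a variant sketch rather than a correction: the conv2d there blurs each z-slice independently, so the blobs don't spread across z. If the spots should also be Gaussian along z, the same trick works with conv3d and a separable 3D kernel (sigma and sizes below mirror the snippet and are otherwise made up):

```python
import math
import torch

size, depth, sigma = 100, 30, 1.0
t = torch.zeros(size, size, depth)
coords = torch.randint(0, size, (50, 3))
coords[:, 2] %= depth
t[coords[:, 0], coords[:, 1], coords[:, 2]] = 1

# separable 3D Gaussian kernel, 9 voxels per side
r = torch.arange(-4, 5, dtype=t.dtype)
g = torch.exp(-r ** 2 / (2 * sigma ** 2))
kernel = g[:, None, None] * g[None, :, None] * g[None, None, :]
kernel = kernel / ((2 * math.pi) ** 1.5 * sigma ** 3)  # normalise
t_gauss = torch.nn.functional.conv3d(
    t[None, None], kernel[None, None], padding=kernel.size(-1) // 2
)[0, 0]
```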

Best regards

Thomas


Many thanks for the reply, tom. I tried your solution, but I get a message stating that I am requesting around 82GB of memory and that I should probably buy more RAM =/

That looks like something went wrong somewhere. The above shouldn’t use much more RAM than any other convolution on your data (a 512*512*30 float32 tensor is only about 31 MB).

Best regards

Thomas