Create a correct Tensor shape for BraTS

I’m extremely new to deep learning, so please forgive me if this question is far too dumb or doesn’t make any sense.

I have downloaded the BraTS dataset, and the images come in the format (240, 240, 155): each image is 240x240 in height x width and there are 155 slices per volume. There are also 4 masks. I have converted all of these to NumPy arrays, and I now want to create a simple UNet.

So I would like to create a function with the following parameters:

def simple_unet_model(IMG_HEIGHT, IMG_WIDTH, IMG_DEPTH, IMG_CHANNELS, num_classes):
    #Create Tensor with parameters IMG_HEIGHT, IMG_WIDTH, IMG_DEPTH, IMG_CHANNELS
 

So I would like to create this input tensor, and when I look at its size or shape I would like to get (240, 240, 155, 1, 3).
The 1 is there (I think) since I’m only working with T1 images and they are all grayscale. Is this possible in PyTorch?

Again, sorry if this question is bad, but maybe someone understands me.

Yes, creating this method would be possible, and the more interesting part would be the model definition itself. You could check popular UNet implementations in PyTorch and reuse them if they fit your use case.
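
For reference, here is a minimal sketch of how such an input tensor could be allocated (the function body below is purely illustrative, not an existing implementation). Note that PyTorch’s 3D layers such as nn.Conv3d expect inputs in the order [batch_size, channels, depth, height, width] rather than (height, width, depth, channels):

import torch

def simple_unet_model(IMG_HEIGHT, IMG_WIDTH, IMG_DEPTH, IMG_CHANNELS, num_classes):
    # Dummy input volume in the layout PyTorch's 3D layers expect:
    # [batch_size, channels, depth, height, width]
    x = torch.zeros(1, IMG_CHANNELS, IMG_DEPTH, IMG_HEIGHT, IMG_WIDTH)
    print(x.shape)
    # > torch.Size([1, 1, 155, 240, 240]) for the example call below
    # ... build and return the actual UNet here ...
    return x

# Example call with the BraTS dimensions from the question
# (num_classes=4 is just a placeholder value)
simple_unet_model(240, 240, 155, 1, num_classes=4)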

Thanks for replying, ptrblck.

I think I’m on the right track, but I have a problem which is probably really simple, yet I can’t seem to solve it.
I have a UNet now and I want to test it.
So I loaded a 2D image of size (240, 240). However, my UNet expects 4 dimensions:
[batch_size, channels, height, width]. How can I add these two dimensions so that when I call, for example, image.size(), I get torch.Size([1, 1, 240, 240])?

You could either use unsqueeze or index the tensor with None (as is also done in e.g. numpy):

x = torch.randn(240, 240)
print(x.size())
# > torch.Size([240, 240])

y1 = x.unsqueeze(0).unsqueeze(0)
print(y1.size())
# > torch.Size([1, 1, 240, 240])

y2 = x[None, None, :, :]
print(y2.size())
# > torch.Size([1, 1, 240, 240])
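
As a quick sanity check (just a sketch, assuming your UNet consumes single-channel 2D inputs), you could pass the expanded tensor through a 2D layer or through your model directly:

import torch
import torch.nn as nn

x = torch.randn(240, 240)
x = x[None, None]  # add batch and channel dims -> [1, 1, 240, 240]

# Stand-in for the model: any 2D layer with in_channels=1 accepts this shape
conv = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3, padding=1)
out = conv(x)
print(out.size())
# > torch.Size([1, 4, 240, 240])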

Thank you for all your help!