Your session crashed after using all RAM (Google Colab)

I am using Google Colab (GPU) to run my model, but as soon as I create an instance of my model, Colab crashes. After analysing it, I found that it breaks down whenever I try to increase the size of the FC (Linear) layers.
I don't understand why it crashes only because of them.

My model contains layers such as:
1. Conv2d
2. MaxPool
3. Conv3d
4. Linear layers
5. Activations (LeakyReLU)

But it only crashes when I change the Linear layers' output sizes. I don't understand how the Linear layers (weights and bias) alone could take up all the memory.

How large are your linear layers?
If you post the shapes of your layers, we could approximate the memory usage, i.e. the parameters and all activations used during training.
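
In the meantime, a rough way to check the parameter memory of any module (a quick sketch; model stands for your model instance, float32 weights assumed):

n_params = sum(p.numel() for p in model.parameters())  # total number of parameters
print(f"{n_params} parameters ~ {n_params * 4 / 1024**3:.2f} GB in float32")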


My Linear layers (encoder) are as follows:

1st layer: (265*16*16, 128*22*22)
2nd layer: (128*22*22, 256*22*22)
3rd layer: (256*22*22, 256*32*32)

I then have to pass the output of the 3rd layer to the decoder to build a model for semantic segmentation.

Thanks for the shape information.
The weight matrices alone will take approx. 165 GB.
If we ignore the bias, you can calculate the memory footprint of e.g. the last linear layer’s weight matrix using:

256*22*22 * 256*32*32 * 4 / 1024**3 = 121 GB

I assume you are using float32 values, which explains the multiplication by 4, as a float32 value needs 4 bytes. The division by 1024**3 gives us the memory in GB.
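
For reference, here is a quick back-of-the-envelope script (weights only, bias ignored, float32 assumed) that adds up all three layers from your post:

layers = [
    (265 * 16 * 16, 128 * 22 * 22),  # 1st layer (in_features, out_features)
    (128 * 22 * 22, 256 * 22 * 22),  # 2nd layer
    (256 * 22 * 22, 256 * 32 * 32),  # 3rd layer
]
total_gb = 0.0
for i, (in_features, out_features) in enumerate(layers, 1):
    gb = in_features * out_features * 4 / 1024**3  # 4 bytes per float32 weight
    total_gb += gb
    print(f"layer {i}: {gb:.1f} GB")
print(f"total: {total_gb:.1f} GB")

This prints roughly 15.7 GB, 28.6 GB, and 121.0 GB, i.e. about 165 GB in total, which is far more than the RAM Colab provides.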
