tcmalloc output when training with a large dataset

I currently get this output when training on my full dataset:

tcmalloc: large alloc 1073741824 bytes == 0x21728c000 @ 0x7f6508e9fb6b 0x7f6508ebf379 0x7f64b9d8c74e 0x7f64b9d8e7b6 0x7f64f47f8d53 0x7f64f417354a 0x7f64f44cdc0a 0x7f64f44f5803 0x7f64f467bb14 0x7f64f47b84ee 0x7f64f4211976 0x7f64f4212b30 0x7f64f44cfb09 0x7f64f3d4e249 0x7f64f4668ae8 0x7f64f45748a5 0x7f64f421441b 0x7f64f47047d8 0x7f64f3d4e249 0x7f64f4668ae8 0x7f64f45749f5 0x7f64f5b48997 0x7f64f3d4e249 0x7f64f4668ae8 0x7f64f45749f5 0x7f650438e30e 0x50a4a5 0x50cc96 0x508cd5 0x594a01 0x59fd0e ^C

The dataset is a volumetric CT scan from which I extracted the slices for segmentation with a 2D UNet model. I am using Colab for the setup.

This is a warning, not an error: it informs you about large memory allocations, which might be expected in your use case.
You could set the TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD environment variable to a higher value if you want to silence these warnings (it should default to 1 GiB, i.e. 1073741824 bytes, if I'm not mistaken).
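A minimal sketch of raising the threshold from a Colab notebook cell, assuming tcmalloc picks up the variable for allocations made in this process (the value of 4 GiB is an arbitrary example, not a recommendation; if the warnings persist, you may need to set the variable before the runtime starts or restart the runtime after setting it):

```python
import os

# TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD is specified in bytes.
# Raise it to 4 GiB so allocations below that size are no longer reported.
# Run this at the top of the notebook, before the heavy imports.
os.environ["TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD"] = str(4 * 1024**3)
```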