Why does the same notebook allocate very different amounts of VRAM in two different environments?

You can see this notebook trains on Kaggle within Kaggle's 16 GB VRAM limit: G2Net GPU Newbie | Kaggle

I just tried to run the same notebook locally on an RTX 3090 GPU with torch 1.8 installed, and it allocates around 23.3 GB of VRAM. Why is this happening, and how can I tune my local environment to behave like Kaggle's? Thanks
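For context, here is a minimal sketch of how I have been checking memory locally. Note that `nvidia-smi` reports the memory *reserved* by PyTorch's caching allocator, which can be much larger than the memory actually *allocated* to live tensors; `torch.cuda.set_per_process_memory_fraction` (available since torch 1.8) is my assumption for roughly mimicking Kaggle's 16 GB cap on a 24 GB card:

```python
import torch

def bytes_to_gib(n: int) -> float:
    """Convert a byte count to GiB for readable reporting."""
    return n / 2**30

def report_gpu_memory(tag: str = "") -> None:
    # allocated = bytes held by live tensors;
    # reserved = total grabbed from the driver by the caching
    # allocator (this is the number nvidia-smi shows).
    alloc = bytes_to_gib(torch.cuda.memory_allocated())
    reserv = bytes_to_gib(torch.cuda.memory_reserved())
    print(f"{tag} allocated={alloc:.2f} GiB, reserved={reserv:.2f} GiB")

if torch.cuda.is_available():
    # Assumption: capping this process at ~16/24 of a 24 GB card
    # to approximate Kaggle's 16 GB limit.
    torch.cuda.set_per_process_memory_fraction(16 / 24, device=0)
    report_gpu_memory("startup:")
```

I am not sure whether the gap I see is reserved-vs-allocated accounting or genuinely higher usage, which is why I am asking.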