Moving a model to a CUDA device leads to a big increase in memory

I am running into a RAM issue when training many models on the same machine. The models themselves are really small, but I see a big jump in RAM when I move one to the GPU with `model.to(device)`, `device` being `cuda:x`. I confirmed this with memory_profiler. Here is the trace:

```
Mem usage      Increment     Line Contents
233.062 MiB      1.496 MiB   model = Network()
2260.094 MiB  1947.422 MiB   model =
```
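For reference, here is a minimal sketch of how the RSS jump can be measured around the `.to(device)` call. It uses the stdlib `resource` module instead of memory_profiler so it is self-contained, and it substitutes a tiny `torch.nn.Linear` for `Network()` (both are assumptions, not the original setup); the GPU part is guarded so the script still runs on a CPU-only machine.

```python
import resource


def rss_mib():
    """Peak resident set size of this process in MiB (Linux reports KiB)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024


def measure_to_device_overhead(device="cuda:0"):
    """Hypothetical reproduction: RSS delta caused by moving a model to a GPU.

    The first CUDA call in a process also initializes the CUDA context,
    which by itself accounts for a large host-RAM allocation.
    """
    import torch  # imported lazily so the file loads without torch installed

    if not torch.cuda.is_available():
        return None

    before = rss_mib()
    model = torch.nn.Linear(8, 8)  # tiny stand-in for Network()
    model = model.to(device)       # triggers CUDA context creation on first use
    after = rss_mib()
    return after - before


if __name__ == "__main__":
    print(f"baseline RSS: {rss_mib():.1f} MiB")
    delta = measure_to_device_overhead()
    if delta is not None:
        print(f"RSS increase after .to(device): {delta:.1f} MiB")
```

Note that the delta reported this way mixes the model's own (small) footprint with the per-process CUDA context, so it measures the symptom described above rather than the model size itself.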

Does anyone have a suggestion?